This article is part of the AWS Community ASEAN content. All presentations and ready-to-deploy code used in this talk are available at the AWS Community ASEAN Content Repo.
Besides containers, serverless technology is something I consider an integral part of modern application development. When I ask developers "what is serverless?", the answer mostly comes down to "no servers". While this is not wrong, it is not the whole picture.
Serverless is an operating model.
With serverless, we delegate activities that do not add value to our business to other parties. By delegating these things, we can focus on improving business logic instead of worrying about infrastructure. In the end, serverless gives us less code, less liability, better integration, better applications, more focus, and more value to deliver. These, I think, are the real values for developers.
Looking back on some past speaking engagements, I often got the opportunity to explain how to build a particular solution using AWS serverless services. Recently, I realized I haven't done much to explain the "why" — the various logical reasons for using serverless. That realization led to this article.
Now that we've set the context, let's dive into why using AWS serverless services might be a good approach for you:
The main reason why using serverless gives you an advantage is that there is no server management. This means that there is no need for patching, retiring aging hardware, and performing day-to-day operations or maintenance.
The most classic example is AWS Lambda. AWS Lambda is a service where we can run a function without the need to manage hardware. At minimum, all we need to do is upload or write our business logic to AWS Lambda.
Like any normal function, a Lambda function can integrate with various databases or interact with HTTP API endpoints. To trigger it, we can integrate with a variety of other AWS services — for example, an HTTP API request through Amazon API Gateway, or an object upload into Amazon S3.
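To make the idea concrete, here is a minimal sketch of a Lambda handler that responds to both of those triggers. The event shapes follow the standard S3 notification and API Gateway proxy formats; the response body is just an illustration.

```python
import json

def lambda_handler(event, context):
    """Minimal handler distinguishing two common triggers:
    an Amazon S3 object upload and an Amazon API Gateway request."""
    # S3 event notifications arrive as a list of "Records", each with an "s3" key.
    if event.get("Records") and "s3" in event["Records"][0]:
        key = event["Records"][0]["s3"]["object"]["key"]
        return {"statusCode": 200, "body": json.dumps({"uploaded": key})}
    # Otherwise treat the event as an API Gateway proxy request.
    return {"statusCode": 200, "body": json.dumps({"message": "Hello from Lambda"})}
```

Notice there is nothing here about servers, processes, or ports — the business logic is all we write and upload.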
For those who use containers, there is also AWS Fargate — a serverless compute engine for containers — which you can use to run your apps. Both options free you from provisioning and maintaining servers, so you can allocate your focus, effort, and time to what matters more: building features for your application.
One of the misconceptions about serverless is that it is synonymous with AWS Lambda. In fact, AWS Lambda is just one layer of the AWS serverless services.
If we refer to serverless as an operational model, in practice serverless can be applied to any layer. For example, for computing, we can use AWS Lambda and AWS Fargate. For databases, we can use Amazon DynamoDB as well as Amazon Aurora Serverless. For integration from service to service, we can use Amazon EventBridge, and so on.
Equally important, each serverless service is rich in features that would be difficult to replicate if we built them ourselves. Take monetizing your API as an example. You need rate limits for different tiers — say, a maximum of 100 RPS with a limit of 5,000 requests per day for plan A, and 200 RPS with a limit of 10,000 requests per day for plan B. Amazon API Gateway provides this out of the box with its usage plans feature.
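As a hedged sketch, the two tiers above could be expressed as parameter sets for API Gateway's `create_usage_plan` call. The plan names are my own placeholders; the throttle and quota values come from the example.

```python
# Hypothetical tier definitions; the dict shapes mirror the parameters
# accepted by API Gateway's create_usage_plan API.
USAGE_PLANS = [
    {
        "name": "plan-a",
        "throttle": {"rateLimit": 100.0, "burstLimit": 100},  # max 100 RPS
        "quota": {"limit": 5000, "period": "DAY"},            # 5,000 requests/day
    },
    {
        "name": "plan-b",
        "throttle": {"rateLimit": 200.0, "burstLimit": 200},  # max 200 RPS
        "quota": {"limit": 10000, "period": "DAY"},           # 10,000 requests/day
    },
]

# Each plan could then be created with:
#   boto3.client("apigateway").create_usage_plan(**plan)
```

None of this throttling or quota accounting is code you have to write yourself — the service enforces it for you.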
The point is that by using the features of the serverless service, you can get your apps to market faster.
But even though we can deliver our apps to market faster, we certainly still need to think about security. Before the era of cloud computing, this was a trade-off for me: deliver apps quickly, or prioritize security, which usually slowed delivery down. Now it is no longer a trade-off — we can do both at the same time.
The diagram below describes how we can secure AWS Lambda's access to Amazon CloudWatch and Amazon DynamoDB using an IAM role.
Even if your code runs fine, without an IAM role that grants access to DynamoDB, the call will fail. This is one example of how we can run applications more securely.
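A least-privilege policy for that execution role might look like the following. This is a sketch under assumptions: the table name `Orders` and the wildcard account/region fields are placeholders, not values from the talk.

```python
import json

# Hypothetical execution-role policy: the function may write logs to
# CloudWatch Logs and read/write a single DynamoDB table, nothing more.
POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
            ],
            "Resource": "arn:aws:logs:*:*:*",
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            # Placeholder table ARN — scope this to your actual table.
            "Resource": "arn:aws:dynamodb:*:*:table/Orders",
        },
    ],
}

policy_json = json.dumps(POLICY, indent=2)
```

Attaching only the actions the function actually needs keeps the blast radius small even if the code itself is compromised.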
Before the era of cloud computing, to run services and apps I needed a fixed number of servers, and of course this approach was neither effective nor cost-efficient. Cloud computing changed all of this by introducing auto scaling. There are two types of scaling, namely horizontal and vertical. Horizontal scaling basically means adding more compute instances to your fleet. Vertical scaling is about adding power, such as CPU or RAM, to the existing machines in your pool.
With serverless, handling fluctuating request volumes is made easier by auto scaling. Here I take database auto scaling with Amazon DynamoDB as an example.
Let's be straight here: database workloads are difficult to predict, and database scaling is very challenging. Most of the startups I used to be involved with were in the Media and Entertainment industry. Read consumption for the database was highest between 8-10am, 1-3pm, and 6-9pm, handling approximately 1 million requests during those hours. Outside of those hours, requests dropped significantly. In other words, the capacity the database needs off-peak is far lower than what it needs during high-traffic hours.
That made me rethink my database choices. When I switched from self-managed MySQL to Amazon DynamoDB, I benefited from auto scaling: I can define one scaling configuration for read capacity and a different one for write capacity.
The image below is an illustration of how auto scaling from Amazon DynamoDB works.
The red line is provisioned capacity and the blue line indicates actual consumption based on item size. Here we can see that provisioned capacity adjusts to the utilization rate to support application performance. This also reduces the cost I need to pay, because capacity is not fixed all the time but adjusts to the requests.
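In code, that behavior comes from registering the table with Application Auto Scaling and attaching a target-tracking policy. The sketch below assumes a table named `Orders` and illustrative capacity bounds; the dict shapes mirror the `register_scalable_target` and `put_scaling_policy` parameters.

```python
# Hypothetical scaling configuration for read capacity on a table "Orders".
READ_SCALING_TARGET = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/Orders",
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "MinCapacity": 5,     # floor during quiet hours
    "MaxCapacity": 500,   # ceiling for the peak windows
}

READ_SCALING_POLICY = {
    "PolicyName": "orders-read-tracking",
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/Orders",
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        # Keep consumed capacity around 70% of what is provisioned.
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
}

# These would be applied with:
#   client = boto3.client("application-autoscaling")
#   client.register_scalable_target(**READ_SCALING_TARGET)
#   client.put_scaling_policy(**READ_SCALING_POLICY)
```

A second pair of target and policy, with `WriteCapacityUnits` as the dimension, would cover writes independently.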
Serverless services on AWS are built with a specific purpose — Amazon DynamoDB as a database, Amazon S3 as object storage, Amazon EventBridge as an event bus. I like to think of them as Lego-like building blocks. This gives developers the flexibility to build architectures from the appropriate services.
This modularity makes it easier to build functionality for an application, and gives us the flexibility to go from very simple designs to sophisticated architectures.
"But if we are adopting various services, that also means we need to work on more integration." That's right. The more services we use, the more integration is needed.
Fortunately, AWS serverless services integrate seamlessly, because the integration features are built in. Take AWS Lambda as an example: you can set up integrations simply by configuring them in the console.
We can find the same thing in other serverless services, for example AWS Step Functions — one of my favorite services, because it makes it easy to create visual workflows for distributed applications. With AWS Step Functions, we can define workflows that integrate seamlessly with AWS Lambda simply by specifying the Lambda function ARNs.
Below is an example of how we can implement distributed transactions using a state machine built with AWS Step Functions, together with Amazon ECS and AWS Fargate.
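To show what "integration by ARN" looks like, here is a hedged sketch of an Amazon States Language definition for a small two-step transaction with a compensating step. The state names, function names, and account ID are all placeholders of my own, not the talk's actual workflow.

```python
import json

# Illustrative saga: reserve inventory, charge payment, and release the
# reservation if anything fails. Lambda ARNs below are placeholders.
DEFINITION = {
    "StartAt": "ReserveInventory",
    "States": {
        "ReserveInventory": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:ap-southeast-1:111122223333:function:ReserveInventory",
            "Next": "ChargePayment",
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "ReleaseInventory"}],
        },
        "ChargePayment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:ap-southeast-1:111122223333:function:ChargePayment",
            "End": True,
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "ReleaseInventory"}],
        },
        "ReleaseInventory": {
            # Compensating action that undoes the reservation.
            "Type": "Task",
            "Resource": "arn:aws:lambda:ap-southeast-1:111122223333:function:ReleaseInventory",
            "End": True,
        },
    },
}

definition_json = json.dumps(DEFINITION)
# The JSON string would be passed to:
#   boto3.client("stepfunctions").create_state_machine(
#       name="order-saga", definition=definition_json, roleArn=...)
```

The whole "call this function, and on failure run that one" logic lives in the definition — none of it is retry or error-handling code inside your functions.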
AWS Lambda first launched in late 2014 as an event-driven function service. Triggering by event was new and a bit unusual at the time, but as we moved our workloads into the cloud, the adoption of events increased.
Events themselves are not entirely new. They can take any form, from changes in the state of a system — such as when data is written to a database — to custom events — such as, in the context of e-commerce, a customer placing an order.
By leveraging events, our applications can respond to the various kinds of events generated by our systems and services. For example, below is a near real-time response by AWS Lambda to data changes in Amazon DynamoDB. This feature is called DynamoDB Streams, and it gives our system the ability to capture a time-ordered sequence of item-level modifications in any DynamoDB table.
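A consuming Lambda function receives batches of those stream records. This minimal sketch just collects the change type and new item image from each record; what you do with them (reindexing, notifications, analytics) is up to your application.

```python
def lambda_handler(event, context):
    """Handle a batch of DynamoDB Streams records.

    Each record names the change (INSERT, MODIFY, REMOVE) and, depending
    on the stream view type, carries the new and/or old item images."""
    changes = []
    for record in event.get("Records", []):
        new_image = record.get("dynamodb", {}).get("NewImage", {})
        changes.append({"event": record["eventName"], "new_image": new_image})
    return changes
```

Because the stream delivers records in time order per item, the function sees modifications in the sequence they happened.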
Leveraging events has become more widely adopted in recent years, and is formalized as event-driven architecture: a paradigm in which applications produce and consume events, and respond to them.
Another example of how serverless can help you is implementing choreography in microservices using Amazon EventBridge as a serverless event bus. With Amazon EventBridge, we can easily define rules that route events to the services that need them. This gives us the advantage of decoupling between services, so each can run its processes independently and scale according to its requests.
Below is an example of how you can build choreography for microservices in the context of an e-commerce application.
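As a sketch of the moving parts, here is what an "order placed" event and a routing rule might look like. The bus name, event source, and detail fields are illustrative assumptions, not the talk's actual schema.

```python
import json

# A hypothetical "order placed" event published to a custom event bus.
ORDER_PLACED = {
    "Source": "ecommerce.orders",
    "DetailType": "OrderPlaced",
    "Detail": json.dumps({"orderId": "1234", "total": 59.9}),
    "EventBusName": "ecommerce-bus",
}

# A rule pattern that matches those events so they can be routed, e.g.,
# to a shipping service. Neither service knows about the other directly.
SHIPPING_RULE_PATTERN = {
    "source": ["ecommerce.orders"],
    "detail-type": ["OrderPlaced"],
}

# Publishing:  boto3.client("events").put_events(Entries=[ORDER_PLACED])
# Routing:     boto3.client("events").put_rule(
#                  Name="route-to-shipping",
#                  EventPattern=json.dumps(SHIPPING_RULE_PATTERN),
#                  EventBusName="ecommerce-bus")
```

The order service only emits the event; shipping, invoicing, or notification services each attach their own rule, which is exactly the decoupling choreography is after.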
At this point, you understand some of the reasons to consider serverless as your development approach. Serverless architecture can be used in almost all cases, from building a simple serverless API or IT automation, to streaming data processing and implementing microservices.
Although this requires a change in perspective from the traditional development approach, my experience — coupled with the ongoing development of features for AWS serverless services — tells me the serverless approach will provide significant advantages in the long run.
You can also reuse the presentations and ready-to-deploy code from this talk. All materials have been collected for you in the AWS Community ASEAN Content Repo.