
Building serverless architectures with AWS Lambda

AWS Lambda

AWS Lambda is a compute service that lets you run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning, automatic scaling, and logging.

You configure your Lambda function with the runtime language you prefer, the amount of memory that the function needs, and the maximum function timeout. The amount of memory determines the virtual CPU and network bandwidth available to the function. Currently, the minimum and maximum amounts of memory that can be allocated are 128 MB and 10,240 MB, respectively. A Lambda function can't run for longer than 15 minutes, so 15 minutes is the maximum timeout setting. This is an AWS hard limit and can't be changed.
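As a rough sketch of what that configuration looks like in practice, the snippet below uses the boto3 SDK to set the memory and timeout on an existing function; the function name and values are placeholders, not part of the original example.

```python
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical function name; memory is in MB (128-10,240),
# timeout is in seconds (hard limit of 900, i.e. 15 minutes).
lambda_client.update_function_configuration(
    FunctionName="my-function",
    MemorySize=512,
    Timeout=30,
)
```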

You create code for the function and upload the code using a deployment package. Lambda supports two types of deployment packages: container images and .zip file archives. The Lambda service invokes the function when an event occurs. Lambda runs multiple instances of your function in parallel, governed by concurrency and scaling limits. You only pay for the compute time that you consume—there is no charge when your code isn’t running.
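To make the model concrete, here is a minimal Python handler of the kind you would package into a .zip archive or container image; the event shape and field names are illustrative only.

```python
# handler.py - a minimal Lambda handler sketch.
import json

def lambda_handler(event, context):
    # "name" is a hypothetical field in the invoking event.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The Lambda service calls the handler with the event that triggered the invocation and a context object describing the invocation.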


A Lambda function runs either inside a VPC owned by the AWS Lambda service or, in the case of Lambda@Edge, at an Amazon CloudFront regional edge cache.

When you create a Lambda function, you deploy it to the AWS Region where you want your Lambda function to run. When a Lambda function is invoked, the Lambda service instantiates an isolated Firecracker virtual machine (VM) on an Amazon Elastic Compute Cloud (Amazon EC2) instance in the Lambda service VPC.

Lambda@Edge is an extension of AWS Lambda that lets you run functions to customize the content that CloudFront delivers. You author Node.js or Python functions in one Region, US East (N. Virginia), and then run them at AWS edge locations globally, closer to the viewer. Processing requests at AWS locations closer to the viewer, instead of on origin servers, significantly reduces latency and improves the user experience.

An example of a Lambda@Edge use case is a retail website that sells bags. If you use cookies to indicate which color a user chose for a small bag, a Lambda function can change the request so that CloudFront returns the image of a bag in the selected color.
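A minimal sketch of that viewer-request function in Python might look like the following; the cookie name, URI, and image naming convention are assumptions made for illustration.

```python
# Lambda@Edge viewer-request sketch: rewrite the image URI based on a
# hypothetical "bag-color" cookie so CloudFront serves the matching image.
def lambda_handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})

    color = None
    for cookie_header in headers.get("cookie", []):
        for cookie in cookie_header["value"].split(";"):
            name, _, value = cookie.strip().partition("=")
            if name == "bag-color" and value:
                color = value

    # e.g. /images/bag.jpg -> /images/bag-red.jpg
    if color and request["uri"] == "/images/bag.jpg":
        request["uri"] = f"/images/bag-{color}.jpg"

    return request
```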

Lambda@Edge

It is a serverless computing capability that allows you to run AWS Lambda functions at AWS edge locations. It integrates with Amazon CloudFront to run application code closer to your customers, improving performance and reducing latency.

Key Features

  • Executing code closer to your customers reduces latency and improves performance.
  • You can scale your application and make it more available to customers around the globe.
  • You can modify request and response behavior for web applications in real time, which lets you customize content delivery.
  • Your code runs in a secure, isolated environment that supports custom authentication and authorization.

Use Cases

  • Dynamic Content Personalization: you can tailor content based on user attributes such as language or location.
  • A/B Testing: serve different versions of content to different user groups for testing (see the sketch after this list).
  • Access Control: you can implement custom authentication and authorization for web content.
  • SEO Optimization: you can modify URLs and headers for search engine optimization.
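As one possible shape for the A/B testing case, a viewer-request function could route a fixed share of viewers to an alternate page; the paths, split, and bucketing scheme below are assumptions, not a prescribed implementation.

```python
import hashlib

# Hypothetical experiment settings.
EXPERIMENT_PATH = "/index-b.html"
EXPERIMENT_SHARE = 10  # percent of viewers who see variant B

def lambda_handler(event, context):
    request = event["Records"][0]["cf"]["request"]

    if request["uri"] == "/index.html":
        # Hash the viewer IP so the same viewer consistently gets the same variant.
        bucket = int(hashlib.md5(request["clientIp"].encode()).hexdigest(), 16) % 100
        if bucket < EXPERIMENT_SHARE:
            request["uri"] = EXPERIMENT_PATH

    return request
```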

How It Works

  • Trigger Points: Runs in response to events generated by CloudFront, such as viewer request, viewer response, origin request, and origin response (a viewer-response sketch follows this list).
  • Deployment: You deploy the function in the US East (N. Virginia) Region, and Lambda@Edge replicates it to edge locations globally.
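For the trigger points above, a viewer-response function is the simplest to sketch: it can modify the response just before CloudFront returns it to the viewer. The header added below is only an example.

```python
# Lambda@Edge viewer-response sketch: add a security header to every response.
def lambda_handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    headers = response.setdefault("headers", {})

    headers["strict-transport-security"] = [{
        "key": "Strict-Transport-Security",
        "value": "max-age=63072000; includeSubDomains",
    }]

    return response
```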

Connecting a Lambda function to your VPC

Sometimes you might have a requirement to implement an architecture that has serverless components and components running in your own VPC. When designing this type of architecture, pay close attention to scaling as some components can cause a bottleneck.

By default, a Lambda function isn't connected to VPCs in your account. If your Lambda function needs to access resources in your account VPC, you can configure the function to connect to your VPC. The Lambda service provides managed resources named Hyperplane elastic network interfaces (ENIs), which are created when the function is configured to connect to a VPC. When invoked, the Lambda function in the Lambda VPC connects to an ENI in your account VPC. Hyperplane ENIs provide NAT capability from the Lambda VPC to your account VPC using VPC-to-VPC NAT (V2N). V2N provides connectivity from the Lambda VPC to your account VPC, but not in the other direction.
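If you manage the function configuration yourself, attaching it to your VPC is a single API call. The sketch below uses boto3 with placeholder subnet and security group IDs, and assumes the function's execution role already has the permissions in the AWSLambdaVPCAccessExecutionRole managed policy so Lambda can create the Hyperplane ENIs.

```python
import boto3

lambda_client = boto3.client("lambda")

# Placeholder IDs; use private subnets in your account VPC.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```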

When you connect a function to a VPC in your account, the function can't access the internet unless your VPC provides access. To give your function access to the internet, route outbound traffic to a NAT gateway in a public subnet. The NAT gateway has a public IP address and can connect to the internet through the VPC's internet gateway.

In this example architecture, the database and the Amazon EC2 application instance can become bottlenecks if the Lambda functions scale aggressively. Lambda functions 1 and 2 connect to an EC2 application instance and an Amazon Relational Database Service (Amazon RDS) Proxy deployed in a private subnet in the customer's VPC, using VPC-to-VPC NAT and the Hyperplane ENIs. To scale the EC2 instance, deploy it behind an Application Load Balancer in an Amazon EC2 Auto Scaling group.

To scale the database, you can use Amazon RDS Proxy, which manages a connection pool to the Amazon RDS database. Because Lambda functions can scale rapidly, the connections to an Amazon RDS database can become saturated, and functions that rapidly open and close database connections can overwhelm the database. When no more connections are available, the function produces an error. With MySQL and Aurora Amazon RDS databases, you can solve this challenge with RDS Proxy: the Lambda functions connect to RDS Proxy, which keeps open connections to the database ready to be used.
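A sketch of that pattern in Python is shown below. It assumes a pymysql dependency bundled with the deployment package and environment variables holding the RDS Proxy endpoint and credentials; in practice you would typically fetch credentials from Secrets Manager or use IAM authentication instead.

```python
import os
import pymysql

# Connect once per execution environment, outside the handler, so warm
# invocations reuse the connection to the RDS Proxy endpoint.
connection = pymysql.connect(
    host=os.environ["PROXY_ENDPOINT"],   # the RDS Proxy endpoint, not the database host
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database=os.environ["DB_NAME"],
    connect_timeout=5,
)

def lambda_handler(event, context):
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1")
        return {"result": cursor.fetchone()[0]}
```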

Here is a summary of what we discussed:

  • AWS Lambda is a serverless compute service that runs code without server management. It automatically scales, and you only pay for the compute time used. Functions can run in a VPC and are triggered by events.
  • Lambda@Edge extends Lambda to AWS edge locations for reduced latency, useful for content personalization and improving performance.
  • To access resources in your VPC, Lambda uses Hyperplane ENIs. For scaling, RDS Proxy helps manage database connections, preventing bottlenecks during rapid function scaling.

In the next article, we will talk about "Identifying Lambda serverless scenarios" and "How to invoke Lambda functions".
