This blog post covers AWS Lambda custom container images basics, when to use them and how to build them with Docker & AWS CDK in Python.
This blog was originally published on my website https://www.ranthebuilder.cloud
When you create an AWS Lambda function, you provide the handler code and its inner modules (logic layer, database handlers, etc.). However, more often than not, your code relies on external dependencies. When your Lambda function runs, it runs on an AWS container base image. The AWS-provided container image might not contain all your external dependencies. In that case, you need to provide them yourself.
You have four options:
Build a Lambda container image yourself, including the handler code and all external dependencies, and upload it to an Amazon ECR repository.
Build & upload a ZIP file containing all Lambda code and its external dependencies, per AWS Lambda function. See code examples here.
Build & upload a ZIP file containing external dependencies as an AWS Lambda layer. See code examples and technical explanations here.
Use an existing Lambda layer ARN that contains the external dependencies.
In this blog, we will focus on option number one, building a Lambda container image.
My previous post focused on options three and four.
AWS Lambda container images provide a convenient way to package libraries and other dependencies you can use with your Lambda functions.
AWS Lambda supports images up to 10 GB in size, unlike the regular ZIP-based build methods, which limit the unzipped deployment package to 250 MB.
AWS CDK requires a Dockerfile to build the container image. Once built, it is uploaded to Amazon ECR (Elastic Container Registry), where it will be stored and used by the Lambda function during invocation.
AWS Lambda container images are a valid method of creating Lambda functions.
However, it's more complex than building Lambda layers or using CDK's built-in dependency ZIP creator as described in options 2–4 above.
In addition, container images come with several drawbacks:
Compared to the alternatives (options 2–4 in the Lambda building options mentioned above), you need to write a Dockerfile, which is not a great user experience.
Building an image takes considerably more time than the alternatives. The image contains the Lambda runtime, your handler code and its dependencies. The outcome will be much larger than the ZIP files you create in the alternatives.
You upload an image to Amazon ECR and pay for its storage, whereas in the other options, you don't.
Larger images take longer to build and upload, which means a longer time to get changes to your production account than building a layer or a ZIP containing just the Lambda code and dependencies.
I consider Lambda container images a niche option, and I believe you should use them only when:
You want total control of the container image for security reasons or custom optimizations.
Your code has dependencies that exceed 250 MB when unzipped. That's the main reason to use them. Simple as that.
Let's assume that our Serverless service uses poetry for dependency management and that the pyproject.toml file looks like this:
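A minimal illustrative pyproject.toml with the dependencies mentioned in this example (package names and version constraints are assumptions, not the real project file):

```toml
[tool.poetry]
name = "my-service"  # hypothetical service name
version = "1.0.0"
description = "Serverless service example"

[tool.poetry.dependencies]
python = "^3.9"
aws-lambda-powertools = "^2.4"
mypy-boto3-dynamodb = "^1.26"
cachetools = "^5.2"
boto3 = "^1.26"

[tool.poetry.dev-dependencies]
pytest = "^7.2"
mypy = "^0.990"
```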
We want to bundle our Lambda function code and its dependencies (AWS Lambda Powertools, mypy-boto3-dynamodb, cachetools, and others) into a Lambda container image and deploy it with AWS CDK.
AWS CDK will create a container image, upload it to Amazon ECR, the container repository, and set an AWS Lambda function to use it as its container image.
You installed Docker.
You installed AWS CDK.
You use poetry as your Python dependency manager.
You use just one general pyproject.toml file for all the AWS Lambda functions. It includes all the requirements of all functions.
Before we write any CDK code, we need to prepare the Dockerfile and the Lambda requirements.txt file, which lists the required Lambda dependencies and their versions.
We will store all the required build artifacts in a new folder: the '.build' folder, which will not be part of the code base.
First, we need to create a requirements.txt file from our pyproject.toml.
In this example, we use poetry, but pipenv is an option too.
We generate a requirements.txt from the [tool.poetry.dependencies] section of the toml file.
Unlike those in the [tool.poetry.dev-dependencies] section of the toml, the libraries in the 'tool.poetry.dependencies' section are the ones that the project's Lambda functions require at runtime, so they must be uploaded to AWS.
Run the following command to create the '.build/ecr' folder and generate the requirements.txt file.
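A sketch of such a command, assuming poetry's export command is available (the '.build/ecr' path matches the layout described above):

```shell
# Create the build folder and export runtime dependencies from pyproject.toml
mkdir -p .build/ecr
poetry export --without-hashes --format=requirements.txt > .build/ecr/requirements.txt
```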
The generated file may look like this:
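For illustration, with the dependencies above, the exported file might look like this (versions and environment markers are hypothetical):

```text
aws-lambda-powertools==2.4.0 ; python_version >= "3.9" and python_version < "4.0"
boto3==1.26.14 ; python_version >= "3.9" and python_version < "4.0"
cachetools==5.2.0 ; python_version >= "3.9" and python_version < "4.0"
mypy-boto3-dynamodb==1.26.24 ; python_version >= "3.9" and python_version < "4.0"
```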
Now, we add the Lambda handler code. Let's assume that in our project, the Lambda function handlers reside in the project's root folder under the 'service' folder.
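For context, here is a minimal, hypothetical handler sketch at 'service/handlers/create_order.py'; the real project would add logging, input validation, and a database layer:

```python
# service/handlers/create_order.py
# Minimal illustrative handler; the 'order_name' field is an assumption.
import json
from typing import Any, Dict


def create_order(event: Dict[str, Any], context: Any) -> Dict[str, Any]:
    """Entry point for the 'create order' Lambda function."""
    body = json.loads(event.get('body') or '{}')
    order_name = body.get('order_name', 'unknown')
    return {
        'statusCode': 200,
        'body': json.dumps({'order_name': order_name}),
    }
```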
Before we build and deploy the Lambda functions to AWS, we copy the 'service' folder to the '.build/ecr' folder. The CDK code that creates the Lambda container image will take the Lambda handler code, requirements.txt, and Dockerfile from the '.build/ecr' folder.
Now, all that's left is to create a Dockerfile and copy it into the '.build/ecr' folder.
As mentioned in the official AWS docs, your container image has to implement the Lambda runtime API:
AWS provides a set of open-source base images that you can use to create your container image. These base images include a runtime interface client to manage the interaction between Lambda and your function code.
We will use a base container image and start building from it.
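A reconstruction of such a Dockerfile, placed in '.build/ecr' (the base image tag and handler path are assumptions based on this example's layout); the line numbers in the walkthrough that follows refer to this listing:

```dockerfile
FROM public.ecr.aws/lambda/python:3.9

# Copy the dependency list into the image root folder
COPY requirements.txt /

# Install the dependencies into the Lambda task root folder
RUN pip install -r /requirements.txt -t "${LAMBDA_TASK_ROOT}"

# Copy the handler code: the 'service' folder and its subfolders
COPY service/ "${LAMBDA_TASK_ROOT}/service/"

# Set the handler: function 'create_order' in service/handlers/create_order.py
CMD ["service.handlers.create_order.create_order"]
```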
- Line 1 sets up the base image. We use the latest version of the official AWS Python 3.9 base image.
- Line 4 copies the requirements.txt into the image's root folder, while line 7 installs the Lambda dependencies into the Lambda task root folder. The LAMBDA_TASK_ROOT environment variable defines the folder where your code resides.
- Lines 9 and 10 copy the handler code and its inner modules, the 'service' folder, and its subfolders into the lambda code folder.
- Line 13 sets the Lambda function handler entry function. In my project, the service had an inner folder called 'handlers' where a 'create_order.py' resides with a function called 'create_order' inside as its entry point.
Now you should have the following '.build' folder structure:
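Assuming the handler layout described above, the structure looks roughly like this:

```text
.build
└── ecr
    ├── Dockerfile
    ├── requirements.txt
    └── service
        └── handlers
            └── create_order.py
```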
I recommend you use a Makefile or any simple script to automate these actions prior to the 'cdk deploy' command.
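A sketch of such a Makefile; target names and paths are assumptions matching the layout above:

```makefile
.PHONY: build deploy

build:
	rm -rf .build/ecr
	mkdir -p .build/ecr
	poetry export --without-hashes --format=requirements.txt > .build/ecr/requirements.txt
	cp -r service .build/ecr/service
	cp Dockerfile .build/ecr/Dockerfile

deploy: build
	cdk deploy
```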
Now that everything is in place, let's write a CDK construct that creates a Lambda container image and a Lambda function that is built according to the configuration in the '.build/ecr' folder.
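A sketch of such a construct, assuming aws-cdk-lib v2 (the construct IDs and timeout value are illustrative); the line numbers referenced in the walkthrough that follows refer to this listing:

```python
from aws_cdk import Duration
from aws_cdk import aws_iam as iam
from aws_cdk import aws_lambda as _lambda
from constructs import Construct


class MyConstruct(Construct):
    """Lambda function backed by a container image built from '.build/ecr'."""

    def __init__(self, scope: Construct, construct_id: str) -> None:
        super().__init__(scope, construct_id)

        # Basic Lambda execution role
        lambda_role = iam.Role(
            self,
            'LambdaRole',
            assumed_by=iam.ServicePrincipal('lambda.amazonaws.com'),
            managed_policies=[iam.ManagedPolicy.from_aws_managed_policy_name(
                'service-role/AWSLambdaBasicExecutionRole')],
        )

        # Lambda function built from the Docker image asset in .build/ecr
        self.my_function = _lambda.DockerImageFunction(
            self,
            'MyContainerFunction',
            code=_lambda.DockerImageCode.from_image_asset('.build/ecr'),
            role=lambda_role,
            timeout=Duration.seconds(30),
            architecture=_lambda.Architecture.ARM_64,
        )
```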
The AWS CDK construct 'MyConstruct' creates:
- Basic Lambda role (lines 14–20).
- Lambda function based on container image (lines 22–30). CDK looks for a requirements.txt and Dockerfile in the asset directory ('.build/ecr') and uses Docker to build the container image. Once built, it is uploaded to Amazon ECR.
In line 29, we specify the Lambda architecture as ARM64. Since I was using a Mac with an M1 chip, I had to set this parameter: Docker on an M1 Mac builds ARM64 images by default, so the function's architecture must match. When I didn't specify it, the Lambda function failed to start with an endpoint error that didn't make any sense (thankfully, Stack Overflow came to the rescue!).
Once deployed, head over to the AWS Lambda console and look for the 'image' tab.
As you can see, the image was uploaded to Amazon ECR and it has the ARM64 architecture.