Pedram Hamidehkhan

Solving the Cold Start Challenge for Lambda Function Using Terraform Cloud

In this post we are going to properly address cold starts for Lambda functions. In a later post we will wrap this configuration in a private module to make it easier to consume.

Start in an empty directory and create the following files:

    "main.tf"
    "variables.tf"
    "output.tf"
    "terraform.auto.tfvars"

In the "main.tf" file add the following to it.

    provider "aws" {
        region = "eu-central-1"
    }

One quick note: please NEVER hard-code your credentials here! Ideally, store them as environment variables in your Terraform Cloud workspace.
Now open the terminal and execute the "terraform init" command. This will initialize our directory and download the providers we need.
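Optionally, you can pin the providers this post relies on (AWS, random, and archive) in a terraform block. A minimal sketch, where the version constraints are my own assumptions; adjust them to whatever you have tested:

    # Optional: pin provider versions so future upgrades are deliberate.
    # The constraints below are illustrative examples.
    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.0"
        }
        random = {
          source  = "hashicorp/random"
          version = "~> 3.0"
        }
        archive = {
          source  = "hashicorp/archive"
          version = "~> 2.0"
        }
      }
    }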
To deploy the Lambda function, we will need to upload the packaged code to S3, so I am also leveraging the random provider to make sure that the S3 bucket name is unique.

    resource "random_pet" "lambda_bucket_name" {
      prefix = "test"
      length = 4
    }


    resource "aws_s3_bucket" "lambda_bucket" {
      bucket = random_pet.lambda_bucket_name.id
      acl    = "private"
        tags = {
        "env" = "test"
      }
    }

The two resources above simply create a bucket with a random, unique name.
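Since the bucket only holds deployment artifacts, you may also want to block all public access to it. This is optional and not part of the original setup; a minimal sketch:

    # Optional hardening: deny any form of public access to the artifact bucket.
    resource "aws_s3_bucket_public_access_block" "lambda_bucket" {
      bucket                  = aws_s3_bucket.lambda_bucket.id
      block_public_acls       = true
      block_public_policy     = true
      ignore_public_acls      = true
      restrict_public_buckets = true
    }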
The following data source packages the code before it is uploaded to S3 (I prefer to avoid the local-exec provisioner whenever possible):

    data "archive_file" "lambdaFunc_lambda_bucket" {
      type = "zip"

      source_dir  = var.src_path
      output_path = var.target_path
    }

After that I upload the deployment artifact to the bucket:

    resource "aws_s3_bucket_object" "lambdaFunc_lambda_bucket" {
      bucket = aws_s3_bucket.lambda_bucket.id

      key    = var.target_path
      source = data.archive_file.lambdaFunc_lambda_bucket.output_path

      etag = filemd5(data.archive_file.lambdaFunc_lambda_bucket.output_path)
        tags = {
        "env" = "test"
      }
    }

Now things get more interesting. When you deploy a Lambda function, you can specify a few parameters that matter when deploying to production. One of them is reserved concurrency: AWS limits the number of concurrent executions per account and per region, and reserved concurrency carves out a dedicated share of that limit for a single function.


Therefore, to make our function's concurrency predictable and to avoid having it throttled because other Lambdas exhaust the account limit, we set this parameter.

    resource "aws_lambda_function" "lambdaFunc" {
      function_name = var.function_name

      s3_bucket = aws_s3_bucket.lambda_bucket.id
      s3_key    = aws_s3_bucket_object.lambdaFunc_lambda_bucket.key

      runtime = var.lambda_runtime
      handler = var.handler

      source_code_hash = data.archive_file.lambdaFunc_lambda_bucket.output_base64sha256

      role                           = aws_iam_role.lambda_exec.arn
      reserved_concurrent_executions = var.concurrent_executions
      tags = {
        "env" = "test"
      }
    }    

Now we get to the most important part: the cold start. As officially suggested by AWS, I will be leveraging provisioned concurrency for the Lambda function. There are many solutions that use CloudWatch Events rules to ping the function and keep it warm, but I really believe provisioned concurrency is much simpler and more comfortable to use. To use provisioned concurrency, we need to create an alias. This alias can also serve as a vehicle for blue/green or canary deployments, which we will hopefully cover in a future post.

    resource "aws_lambda_alias" "con_lambda_alias" {
      name             = "lambda_alias"
      description      = "for blue green deployments OR for concurrency"
      function_name    = aws_lambda_function.lambdaFunc.arn
      function_version = var.function_version
    }

    resource "aws_lambda_provisioned_concurrency_config" "config" {
      function_name                     = aws_lambda_alias.con_lambda_alias.function_name
      provisioned_concurrent_executions = var.provisioned_concurrent_executions
      qualifier                         = aws_lambda_alias.con_lambda_alias.name
    }

Please note that I used variables as the values for these parameters. These variables have defaults; feel free to change them as your use case requires. Also be aware that provisioned concurrency costs money, since AWS keeps warm instances of your function running around the clock.
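If paying for warm instances around the clock is too expensive for your use case, one option (not part of this post's setup) is to let Application Auto Scaling adjust provisioned concurrency on a schedule; if you go this route, the scheduled actions would manage the value instead of the static "aws_lambda_provisioned_concurrency_config" above. A hedged sketch, where the capacities and cron expressions are illustrative assumptions:

    # Register the alias as a scalable target for provisioned concurrency.
    resource "aws_appautoscaling_target" "lambda_pc" {
      min_capacity       = 0
      max_capacity       = var.provisioned_concurrent_executions
      resource_id        = "function:${aws_lambda_function.lambdaFunc.function_name}:${aws_lambda_alias.con_lambda_alias.name}"
      scalable_dimension = "lambda:function:ProvisionedConcurrency"
      service_namespace  = "lambda"
    }

    # Keep instances warm during business hours only (times are examples).
    resource "aws_appautoscaling_scheduled_action" "scale_up" {
      name               = "lambda-pc-scale-up"
      service_namespace  = aws_appautoscaling_target.lambda_pc.service_namespace
      resource_id        = aws_appautoscaling_target.lambda_pc.resource_id
      scalable_dimension = aws_appautoscaling_target.lambda_pc.scalable_dimension
      schedule           = "cron(0 8 ? * MON-FRI *)"

      scalable_target_action {
        min_capacity = var.provisioned_concurrent_executions
        max_capacity = var.provisioned_concurrent_executions
      }
    }

    resource "aws_appautoscaling_scheduled_action" "scale_down" {
      name               = "lambda-pc-scale-down"
      service_namespace  = aws_appautoscaling_target.lambda_pc.service_namespace
      resource_id        = aws_appautoscaling_target.lambda_pc.resource_id
      scalable_dimension = aws_appautoscaling_target.lambda_pc.scalable_dimension
      schedule           = "cron(0 20 ? * MON-FRI *)"

      scalable_target_action {
        min_capacity = 0
        max_capacity = 0
      }
    }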

Lastly, we need a basic execution role for our Lambda function.

    resource "aws_iam_role" "lambda_exec" {
      name = "serverless_lambda"
      assume_role_policy = jsonencode(
        {
          "Version" : "2012-10-17",
          "Statement" : [
            {
              "Effect" : "Allow",
              "Principal" : {
                "Service" : "lambda.amazonaws.com"
              },
              "Action" : "sts:AssumeRole"
            }
          ]
        }
      )
    }
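The role above only lets the Lambda service assume it. If you also want your function to write logs to CloudWatch (not included in the original setup), you can attach the AWS-managed basic execution policy:

    # Optional: allow the function to write logs to CloudWatch Logs.
    resource "aws_iam_role_policy_attachment" "lambda_logs" {
      role       = aws_iam_role.lambda_exec.name
      policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
    }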

In the "variables.tf" file add the following to it:

    variable "function_name" {
      type    = string
      default = "test-function"
    }

    variable "src_path" {
      type = string
    }

    variable "target_path" {
      type = string
    }

    variable "lambda_runtime" {
      type    = string
      default = "nodejs12.x"
    }

    variable "handler" {
      type    = string
      default = "index.handler"
    }

    variable "region" {
      type    = string
      default = "eu-central-1"
    }

    variable "concurrent_executions" {
      type    = string
      default = "1"
    }

    variable "provisioned_concurrent_executions" {
      type    = string
      default = "1"
    }

    variable "function_version" {
      type    = string
      default = "1"
    }

And in the "terraform.auto.tfvars" file add the following content:

    src_path    = "./src"  //the path to the source code for lambdafunction
    target_path = "./artifacts/lambda_deployment.zip" //the path for the deployment artifact
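We created an "output.tf" file at the start but have not filled it yet. A minimal sketch of outputs you might expose (the names are my own choice):

    output "function_name" {
      description = "Name of the deployed Lambda function"
      value       = aws_lambda_function.lambdaFunc.function_name
    }

    output "alias_arn" {
      description = "ARN of the alias that carries the provisioned concurrency"
      value       = aws_lambda_alias.con_lambda_alias.arn
    }

    output "artifact_bucket" {
      description = "S3 bucket holding the deployment artifact"
      value       = aws_s3_bucket.lambda_bucket.id
    }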

You can of course bring your own Lambda function, but if you don't have one, create a folder called "src" and a file in it called "index.js" (so that the default handler "index.handler" resolves), and add the following to it:

    exports.handler =  async (event) => {
      const payload = {
        date: new Date(),
        message: 'Terraform is awesome!'
      };
      return JSON.stringify(payload);
    };

Now you can deploy the application using the Terraform CLI or Terraform Cloud. For the CLI version, simply run:

    terraform apply -auto-approve

If you want to use Terraform Cloud for a "GitOps"-style workflow, see the following documentation:

https://www.hashicorp.com/resources/a-practitioner-s-guide-to-using-hashicorp-terraform-cloud-with-github

This problem might have been difficult to solve, but I can guarantee that it is much harder to communicate the solution properly.
I will create another blog post explaining how you can optimally share this solution with others inside your organization.

GitHub Repository: https://github.com/pedramha/terraform-aws-lambda
YouTube: https://www.youtube.com/watch?v=e0QplrqH0J4

Thank you!
Pedram
