Antonio Feregrino

Infrastructure with Terraform – Tweeting from a lambda

Since we are working with an AWS Lambda, we need to create some infrastructure in AWS.

As a programmer I like to define everything in code; infrastructure provisioning, however, is something that until recently had to be managed manually, either through a graphical interface or through a CLI with limited scripting capabilities.

Over the years, tools have emerged that brought us closer to the dream of creating infrastructure just by defining it in code; tools such as Ansible, CloudFormation and Terraform allow us to do just that. And it is precisely the last one that I chose to create the necessary pieces for this series of posts.

It is not my goal to explain how Terraform works (I don't properly know myself; in this post I did the minimum needed to get the lambda working). Instead, I will walk through the contents of the terraform/ folder that holds the infrastructure definition.


Terraform interacts with remote systems (such as AWS) through plugins; these plugins are known as providers.

Each Terraform module must specify the providers it needs via the required_providers block; each provider has a name, a source, and a version. For example, in the lambda project I am describing here, I am using 2 providers:

  • aws, which lives in hashicorp/aws; any version matching 3.27.x will work
  • null, a special provider; I'll tell you more about it later.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }

    null = {
      version = "~> 3.0.0"
    }
  }

  required_version = ">= 0.14.9"

  backend "s3" {
    bucket = "feregrino-terraform-states"
    key    = "lambda-cycles-final"
    region = "eu-west-1"
  }
}

Backend configuration

Within the terraform configuration block you can also see another block, backend "s3". It specifies where the state file will live; this file keeps track of the state of the infrastructure we have created with Terraform so far. As I discussed in the first post of the series, the state will live in an S3 bucket, whose details we put in the backend block.

Provider configuration

Some providers require extra configuration; AWS, for example, requires us to configure things like the region we want to connect to, the profile, and the credentials we are going to use. The recommendation is to never put passwords or secrets in code, so my AWS configuration only has:

provider "aws" {
  profile = "default"
  region  = "eu-west-1"
}

Data Sources

Terraform allows us to access data defined outside our configuration files through data blocks. Through these we can, for example, access information about the user executing commands in AWS, using aws_caller_identity:

data "aws_caller_identity" "current_identity" {}
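To see what this data source gives us, here is a quick sketch (the output name is my own, not part of the original configuration) that exposes the caller's account ID:

```hcl
# Hypothetical output: after an apply, `terraform output caller_account_id`
# prints the AWS account Terraform is operating on.
output "caller_account_id" {
  value = data.aws_caller_identity.current_identity.account_id
}
```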

Local Values

I like to think of local values as variables within each module; we define them inside a locals block. Locals can also take values from other sources, such as variables or data sources, to simplify access to them:

locals {
  account_id          = data.aws_caller_identity.current_identity.account_id
  prefix              = "lambda-cycles-final"
  ecr_repository_name = "${local.prefix}-image-repo"
  region              = "eu-west-1"
  ecr_image_tag       = "latest"
}



Given the nature of the service I am deploying, I need to access the secrets stored in AWS Secrets Manager. These must also be specified as data sources, with data blocks: first we access the secret itself with aws_secretsmanager_secret, and then its latest version with aws_secretsmanager_secret_version:

data "aws_secretsmanager_secret" "twitter_secrets" {
  arn = "arn:aws:secretsmanager:${local.region}:${local.account_id}:secret:lambda/cycles/twitter-2GMvKu"
}

data "aws_secretsmanager_secret_version" "current_twitter_secrets" {
  secret_id = data.aws_secretsmanager_secret.twitter_secrets.arn
}

ECR repository

As the lambda is going to be deployed as a Docker container, we need to create a repository in ECR. We can use the aws_ecr_repository resource, taking the repository name from one of the local values:

resource "aws_ecr_repository" "lambda_image" {
  name                 = local.ecr_repository_name
  image_tag_mutability = "MUTABLE"

  image_scanning_configuration {
    scan_on_push = false
  }
}

Creating a Docker image

Once the repository is created, we need to push an image to it. However, Terraform is meant to define infrastructure, not to perform tasks such as building a Docker image, much less uploading one. I am going to assume that, before running Terraform, I already have an image built locally with the name lambda-cycles; the only thing missing then is pushing it to the ECR repository.

We can use a little hack to accomplish this with Terraform: a null resource (null_resource, from the null provider I mentioned earlier) together with a provisioner called local-exec, which lets you specify commands to be executed on the local machine:

resource "null_resource" "ecr_image" {
  triggers = {
    python_file_1 = filemd5("../")
    python_file_2 = filemd5("../")
    python_file_3 = filemd5("../")
    python_file_4 = filemd5("../")
    requirements  = filemd5("../requirements.txt")
    docker_file   = filemd5("../Dockerfile")
  }

  provisioner "local-exec" {
    command = <<EOF
           aws ecr get-login-password --region ${local.region} | docker login --username AWS --password-stdin ${local.account_id}.dkr.ecr.${local.region}.amazonaws.com
           docker tag lambda-cycles ${aws_ecr_repository.lambda_image.repository_url}:${local.ecr_image_tag}
           docker push ${aws_ecr_repository.lambda_image.repository_url}:${local.ecr_image_tag}
       EOF
  }
}

Did you notice the triggers block? It helps us track changes to the files that determine whether the lambda container has changed; with filemd5 we get a hash of each specified file. This means that if we make any change to the .py files, the Docker image will be rebuilt and pushed to the ECR repository.

Image information

We also need a data source (an aws_ecr_image) for the published image, one that depends on the image actually being built and pushed; we can express this with depends_on:

data "aws_ecr_image" "lambda_image" {
  depends_on = [
    null_resource.ecr_image
  ]
  repository_name = local.ecr_repository_name
  image_tag       = local.ecr_image_tag
}

Policies and permissions

Before creating the lambda, I have to take care of a couple of administrative tasks. The first is to create a role that the lambda can assume in order to execute:

resource "aws_iam_role" "lambda" {
  name               = "${local.prefix}-lambda-role"
  assume_role_policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "sts:AssumeRole",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Effect": "Allow"
        }
    ]
}
EOF
}

Now, since I want to monitor my lambda and know if any errors occurred during its execution, I need to grant it permissions to create logs in CloudWatch:

data "aws_iam_policy_document" "lambda" {
  statement {
    actions = [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents",
    ]
    effect    = "Allow"
    resources = ["*"]
    sid       = "CreateCloudWatchLogs"
  }
}

resource "aws_iam_policy" "lambda" {
  name   = "${local.prefix}-lambda-policy"
  path   = "/"
  policy = data.aws_iam_policy_document.lambda.json
}
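On its own, the policy is not yet connected to the role. The repository presumably attaches it with an aws_iam_role_policy_attachment; a minimal sketch (the resource name here is my own):

```hcl
# Attach the CloudWatch logging policy to the lambda's execution role.
resource "aws_iam_role_policy_attachment" "lambda_logs" {
  role       = aws_iam_role.lambda.name
  policy_arn = aws_iam_policy.lambda.arn
}
```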

Lambda – at last

Now that I have almost everything in place, I can create the lambda via the aws_lambda_function resource. This is one of the more convoluted definitions in this tutorial, so I'll explain it in a bit more detail.

The first thing I do is add a dependency on my Docker image build with depends_on, then I specify the name of the lambda and the role it should assume with function_name and role. I know in advance that this lambda can take a bit of time to run, so I'll set its timeout a bit high.

Once we create our image in ECR we must tell the lambda that the package_type is an image, followed by the image_uri so that it knows where to find it.

Finally, since my lambda is going to send a tweet, I need to pass it the necessary secrets. Again, in the interest of keeping everything as private as possible, we define them as environment variables (instead of hardcoding them); I achieve this with the environment block, extracting the values from – yes, it is repetitive – the secrets previously stored in AWS:

resource "aws_lambda_function" "lambda" {
  depends_on = [
    null_resource.ecr_image
  ]
  function_name = "${local.prefix}-lambda"
  role          = aws_iam_role.lambda.arn
  timeout       = 300
  image_uri     = "${aws_ecr_repository.lambda_image.repository_url}@${data.aws_ecr_image.lambda_image.id}"
  package_type  = "Image"
  environment {
    variables = {
      API_KEY             = jsondecode(data.aws_secretsmanager_secret_version.current_twitter_secrets.secret_string)["API_KEY"]
      API_SECRET          = jsondecode(data.aws_secretsmanager_secret_version.current_twitter_secrets.secret_string)["API_SECRET"]
      ACCESS_TOKEN        = jsondecode(data.aws_secretsmanager_secret_version.current_twitter_secrets.secret_string)["ACCESS_TOKEN"]
      ACCESS_TOKEN_SECRET = jsondecode(data.aws_secretsmanager_secret_version.current_twitter_secrets.secret_string)["ACCESS_TOKEN_SECRET"]
    }
  }
}
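As an aside, one way to avoid calling jsondecode four times (a refactoring suggestion of mine, not part of the original post) is to decode the secret once into a local value:

```hcl
# Decode the secret JSON a single time...
locals {
  twitter_secrets = jsondecode(data.aws_secretsmanager_secret_version.current_twitter_secrets.secret_string)
}

# ...so each environment variable becomes a simple lookup:
#   API_KEY = local.twitter_secrets["API_KEY"]
```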

Running every X minutes

So far so good; if you run Terraform up to this point, you will have created several resources: an ECR repository, a Docker image, and a lambda. But the icing on the cake is missing: the whole point of turning this code into a lambda is that I want to run it multiple times throughout the day, every so often.

To achieve this, I can use a trigger from the AWS CloudWatch service, something that executes my lambda at time intervals defined by me; this is possible with Terraform as well.

The first thing is to define an event rule in CloudWatch:

resource "aws_cloudwatch_event_rule" "every_x_minutes" {
  name                = "${local.prefix}-event-rule-lambda"
  description         = "Fires every 20 minutes"
  schedule_expression = "cron(0/20 * * * ? *)"
}
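If the cron syntax looks cryptic, CloudWatch event rules also accept rate() expressions; an equivalent alternative (not what the post uses) would be:

```hcl
resource "aws_cloudwatch_event_rule" "every_x_minutes" {
  name                = "${local.prefix}-event-rule-lambda"
  description         = "Fires every 20 minutes"
  schedule_expression = "rate(20 minutes)"
}
```

Note that rate() counts from when the rule is created, while the cron expression pins executions to :00, :20 and :40 past each hour.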

This event needs a target, in this case it's my lambda:

resource "aws_cloudwatch_event_target" "trigger_every_x_minutes" {
  rule      = aws_cloudwatch_event_rule.every_x_minutes.name
  target_id = "lambda"
  arn       = aws_lambda_function.lambda.arn
}

And of course, like almost everything in AWS, we also need to grant it permissions so that the event can invoke the lambda:

resource "aws_lambda_permission" "allow_cloudwatch_to_call_lambda" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.lambda.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.every_x_minutes.arn
}

Et voilà! We now have all the necessary ingredients to create and run our lambda using Terraform.
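If you want to try it yourself, the usual Terraform workflow applies (a sketch; exact flags and AWS credential setup may vary with your environment):

```shell
terraform init    # download the aws/null providers and configure the S3 backend
terraform plan    # preview the resources that will be created
terraform apply   # create the ECR repository, push the image, and deploy the lambda
```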

Remember, all of this content lives in the terraform/ folder of the repository we've been working on.

This is how the repository looks at this point.

Remember that you can find me on Twitter at @feregri_no to ask me about this post – if something is not so clear or you found a typo. The final code for this series is on GitHub and the account tweeting the status of the bike network is @CyclesLondon.
