Overview
In this article, we'll deploy a simple Go hello-world Lambda function to AWS using Terraform and cover all the necessary steps to do that. All resources we deploy are covered by the AWS free tier, so the tutorial will cost us $0.
AWS Lambda is a popular cloud computing service that allows developers to run their code without the need to manage infrastructure. With AWS Lambda, you only pay for the computing resources you consume, making it a cost-effective solution for running your applications. For more details about AWS Lambda, check What is AWS lambda.
In this tutorial, we will explore how to deploy a Lambda function using Terraform, a popular open-source tool for provisioning infrastructure as code. With Terraform, you can define and manage all the components of your infrastructure, including AWS Lambda functions, in configuration files. This makes it easy to version control your infrastructure and manage changes over time.
Prerequisites
To deploy the Lambda function, we first need an AWS account and a working Terraform setup. A detailed guide on how to do that can be found in the configure environment section of the setup guide.
Note: We are going to use a profile tutorial-terraform-profile with admin permissions. That is not the best approach for production because of security concerns, but it is convenient for test purposes.
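For reference, such a profile is usually defined in the local AWS configuration files. A minimal sketch could look like this (the access keys are placeholders you have to replace with your own):

# ~/.aws/credentials
[tutorial-terraform-profile]
aws_access_key_id     = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>

# ~/.aws/config
[profile tutorial-terraform-profile]
region = us-east-1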
We also need Go installed to compile our Go code for this tutorial. Go installation instructions can be found in the official docs.
Structure of the project
The project structure contains multiple Terraform files and a lambda directory with the code of the hello-world Go lambda function.
tutorial/
|-- lambda/
|   `-- hello-world/
|       `-- main.go
|-- iam.tf
|-- lambda.tf
|-- locals.tf
`-- main.tf
Basic setup
First, we need to provide the basic Terraform setup: specify the providers we are going to use and configure the AWS provider.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
    archive = {
      source = "hashicorp/archive"
    }
    null = {
      source = "hashicorp/null"
    }
  }

  required_version = ">= 1.3.7"
}

provider "aws" {
  region  = "us-east-1"
  profile = "tutorial-terraform-profile"

  default_tags {
    tags = {
      app = "tutorial-terraform"
    }
  }
}
The terraform block defines the Terraform configuration itself and specifies the required providers and the required Terraform version, which is >= 1.3.7. The required_providers section lists the providers that will be used in the Terraform configuration. In this tutorial we need:
- aws - to create AWS resources;
- archive - to manage archive files, such as ZIP files, within Terraform; in our case, we need to archive the Go binary in ZIP format to upload it to Lambda;
- null - to run local commands; in our case, we are going to build the Go binary using the go build command.
The provider "aws"
block specifies the AWS provider settings, which include the AWS region (us-east-1
) and the AWS profile (tutorial-terraform-profile
) to use for authentication.
Note: You need to specify the appropriate profile you have in .aws/config for the Terraform AWS provider.
The default_tags section sets the default tags that will be applied to the AWS resources created by Terraform, in this case the tag app = "tutorial-terraform". Tags are mostly used to filter resources and to track usage and costs.
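For example, once the resources exist, you could list everything carrying this tag with the AWS CLI. This is only an illustration, not part of the tutorial setup, and assumes the CLI is installed and configured with the same profile:

aws resourcegroupstaggingapi get-resources \
    --tag-filters Key=app,Values=tutorial-terraform \
    --profile tutorial-terraform-profile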
Identity and access management setup
// allow the lambda service to assume (use) the role with such a policy
data "aws_iam_policy_document" "assume_lambda_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

// create the lambda role that the lambda function can assume (use)
resource "aws_iam_role" "lambda" {
  name               = "AssumeLambdaRole"
  description        = "Role for lambda to assume lambda"
  assume_role_policy = data.aws_iam_policy_document.assume_lambda_role.json
}
We allow the sts:AssumeRole action for the Lambda service principal (lambda.amazonaws.com), which means this particular role can be assumed only by Lambda functions.
data "aws_iam_policy_document" "allow_lambda_logging" {
statement {
effect = "Allow"
actions = [
"logs:CreateLogStream",
"logs:PutLogEvents",
]
resources = [
"arn:aws:logs:*:*:*",
]
}
}
// create a policy to allow writing into logs and create logs stream
resource "aws_iam_policy" "function_logging_policy" {
name = "AllowLambdaLoggingPolicy"
description = "Policy for lambda cloudwatch logging"
policy = data.aws_iam_policy_document.allow_lambda_logging.json
}
// attach policy to out created lambda role
resource "aws_iam_role_policy_attachment" "lambda_logging_policy_attachment" {
role = aws_iam_role.lambda.id
policy_arn = aws_iam_policy.function_logging_policy.arn
}
Go lambda function code
The hello-world function for Lambda in Go is a simple executable in the main package:
package main

import (
    "context"
    "fmt"

    "github.com/aws/aws-lambda-go/lambda"
)

func HandleRequest(ctx context.Context, event interface{}) (string, error) {
    fmt.Println("event", event)
    return "Hello world", nil
}

func main() {
    lambda.Start(HandleRequest)
}
We use the github.com/aws/aws-lambda-go/lambda package to make our main function executable as a Lambda. We need to call lambda.Start(HandleRequest) in the main function to process the events that trigger the Lambda. In the handler function HandleRequest, we receive the context of the event and the event itself.
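If you are creating the lambda/hello-world directory from scratch, the Go module and the dependency also need to be initialized before building. A minimal sketch (the module name here is an arbitrary assumption):

cd lambda/hello-world
go mod init hello-world
go get github.com/aws/aws-lambda-go/lambda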
Go lambda function resource
To deploy the AWS Lambda function, we need to build a binary for our lambda and upload it in ZIP format:
// build the binary for the lambda function in a specified path
resource "null_resource" "function_binary" {
  provisioner "local-exec" {
    command = "GOOS=linux GOARCH=amd64 CGO_ENABLED=0 GOFLAGS=-trimpath go build -mod=readonly -ldflags='-s -w' -o ${local.binary_path} ${local.src_path}"
  }
}

// zip the binary, as we can upload only zip files to AWS lambda
data "archive_file" "function_archive" {
  depends_on = [null_resource.function_binary]

  type        = "zip"
  source_file = local.binary_path
  output_path = local.archive_path
}

// create the lambda function from the zip file
resource "aws_lambda_function" "function" {
  function_name = "hello-world"
  description   = "My first hello world function"
  role          = aws_iam_role.lambda.arn
  handler       = local.binary_name
  memory_size   = 128

  filename         = local.archive_path
  source_code_hash = data.archive_file.function_archive.output_base64sha256

  runtime = "go1.x"
}
We first compile the binary using null_resource to execute the local go build command. After that, we have a binary at tf_generated/hello-world.
Next, we need to archive the binary as a ZIP, since Lambda accepts only ZIP archive uploads. For that, we use archive_file and point source_file to our binary.
To create the AWS Lambda function we need to use the aws_lambda_function resource. The function's name, description, IAM role, handler, memory size, ZIP archive, and runtime are all specified in this step.
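The local.* values referenced in these resources come from locals.tf, which is not shown above. A minimal sketch consistent with the paths used in this tutorial (the exact values are an assumption) could be:

// locals.tf - values shared by the build, archive, and lambda resources
locals {
  binary_name  = "hello-world"
  src_path     = "lambda/hello-world/main.go"
  binary_path  = "./tf_generated/${local.binary_name}"
  archive_path = "./tf_generated/${local.binary_name}.zip"
}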
In addition to the Lambda function, we need to create a CloudWatch Log Group, using the aws_cloudwatch_log_group resource, to gather logs from the deployed function. This allows you to monitor and debug the function's performance in a centralized manner.
// create log group in cloudwatch to gather logs of our lambda function
resource "aws_cloudwatch_log_group" "log_group" {
  name              = "/aws/lambda/${aws_lambda_function.function.function_name}"
  retention_in_days = 7
}
Deploy Go lambda function
Before deploying the resources, we need to initialize terraform:
terraform init
After that, we are ready to plan our resources. That means we are not going to create them yet, but only validate that our Terraform code is correct and check which resources will be created:
terraform plan
And we will see something like:
...
Terraform will perform the following actions:

  # data.archive_file.function_archive will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "archive_file" "function_archive" {
      + id                  = (known after apply)
      + output_base64sha256 = (known after apply)
      + output_md5          = (known after apply)
      + output_path         = "./tf_generated/hello-world.zip"
      + output_sha          = (known after apply)
      + output_size         = (known after apply)
      + source_file         = "./tf_generated/hello-world"
      + type                = "zip"
    }

  # aws_cloudwatch_log_group.log_group will be created
  + resource "aws_cloudwatch_log_group" "log_group" {
      + arn = (known after apply)
      + id  = (known after apply)
...

Plan: 6 to add, 0 to change, 0 to destroy.
In total, we are going to create 6 resources:
- the binary file of compiled Go code
- zip archive of the binary
- iam policy for logs
- iam role for lambda
- iam policy attachment (that's a separate resource)
- lambda function
That looks correct and we can create our resources. We need to run:
terraform apply
and type yes to confirm the resources to be created.
Note: we can use -auto-approve to approve the plan automatically, but that is more appropriate for automated runs in a CI/CD pipeline. For manual deployments, it is better to avoid this parameter so you can double-check the resources that are going to be created.
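For example, an automated pipeline step would typically run:

terraform apply -auto-approve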
Validation of lambda function deployment
After the deployment succeeds, we can check the deployed lambda. We need to go to the Lambda section via the search in the AWS console:
There we can find the hello-world function.
Going inside the function, we can see the details of the lambda function version. To test it, there is a tab called Test:
We need to specify the event name and the event JSON, and click Test to run the function:
After that, we see the execution summary. The function returns a "Hello world" message 🎉
In the execution summary, we can see the duration of the function run and the billed duration (the time we are going to pay for). There is also a direct link to the logs of the function.
In the logs, we can see the event that we passed to the function:
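If you prefer the command line, the same check can be done with the AWS CLI (v2). This is only a sketch, not part of the original walkthrough, and it assumes the CLI is configured with the same profile; the payload is an arbitrary example:

aws lambda invoke \
    --function-name hello-world \
    --payload '{"event-name": "cli-test"}' \
    --cli-binary-format raw-in-base64-out \
    --profile tutorial-terraform-profile \
    response.json

cat response.json
# should print "Hello world"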
Destroy resources
To destroy all resources we need to run:
terraform destroy
and type yes to confirm.
Conclusion
We deployed an AWS Lambda function with Terraform. By following the steps in this tutorial, you'll be able to manage your Lambda functions with ease and ensure consistency across your infrastructure. And let's be real, who doesn't love a little automation in their life?
💡 If You Liked The Article
If you like the article and want to stay updated on the latest and greatest in software engineering, make sure to subscribe for new article updates in TheDevBook.
Happy coding 💻