In the previous session, I shared how to develop a Lambda function that asks RDS to copy a snapshot from a source region to a target region. It was written in Go, using the AWS SDK to build requests for the AWS API, plus some logic that manages already-replicated snapshots by copying only those that do not yet exist in the target region. You can read more here.
This session covers how to deploy that source code to real infrastructure on AWS using Terraform. I will demonstrate how to use Terraform as Infrastructure as Code to create the cloud environment that serves the Lambda function described above.
Related AWS services
- EventBridge: event bus and event rules
- IAM: policies and roles
- S3: bucket and objects
- Lambda: function and permissions
- CloudWatch: log group
- KMS: multi-region key
Preparations
Before developing the Terraform code, you should install the following.
- Terraform CLI
- AWS CLI
- Go toolchain
Getting started
Define the Terraform information.
# versions.tf
terraform {
  required_version = ">=0.12"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.7.0"
    }
  }
}
Define AWS provider
# providers.tf
provider "aws" {
  # Configuration options
  region = local.region
}
Create a main Terraform file to define local variables for this environment. Put values that may change in other scenarios here, so that the resources in the other files can reference them.
# main.tf
locals {
  # Use in providers.tf
  region = "ap-southeast-1"

  # For S3
  S3_prefix = "lambda-lake-"

  # For Lambda components
  lambda_func_name = "rds_copy_snap"
  lambda_s3_key    = "lambda-hello.zip"
  handler_bin_name = "hello"
  func_env_var = {
    OPT_SRC_REGION        = "ap-southeast-1"
    OPT_TARGET_REGION     = "ap-southeast-2"
    OPT_DEBUG             = "True"
    OPT_DB_NAME           = ""
    OPT_OPTION_GROUP_NAME = ""
    OPT_KMS_KEY_ID        = ""
  }

  # For EventBridge
  bus_name = "rds-bus" # Leave as `default` to use the AWS service event bus.
  rds_events = {
    description = "Capture RDS snapshot events"
    event_pattern = jsonencode({
      "source" : ["aws.rds", "demo.event"],
      "detail" : {
        "EventID" : ["RDS-EVENT-0042", "RDS-EVENT-0091"]
      }
    })
  }

  # Default tags for all resources created by this code.
  tags = {
    Owner   = "watcharin"
    Project = "local-dev"
  }
}
# Use to get my AWS session information
data "aws_caller_identity" "current" {}
First, I created the S3 resources in a separate file. This provides a new S3 bucket to store the compressed binary built from the Go code, which will then be loaded into Lambda. To generate the bucket name, I defined a prefix and let the module append a random suffix.
# s3-buckets.tf
module "s3_lambda" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket_prefix = local.S3_prefix
  create_bucket = true

  block_public_acls        = true
  control_object_ownership = true
  object_ownership         = "ObjectWriter"

  tags = local.tags
}
After creating the S3 bucket, I built the Go binary and pushed it into the bucket.
# Use Taskfile to run the pipeline steps, or follow the manual steps in Taskfile.yml.
task build
task upload:lambda
Let's create the Lambda resources. First, I provision an IAM role that the function can assume.
# lambda.tf
data "aws_iam_policy_document" "lambda_assume" {
  statement {
    effect = "Allow"
    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
    actions = [
      "sts:AssumeRole"
    ]
  }
}

resource "aws_iam_role" "lambda_iam_role" {
  name               = "iam_for_lambda"
  assume_role_policy = data.aws_iam_policy_document.lambda_assume.json
}
Then, create policy documents and attach them to the Lambda role. They grant the function access to the AWS resources it needs: the CloudWatch log group, the KMS key, and RDS snapshots.
# lambda.tf
data "aws_iam_policy_document" "allow_logging" {
  policy_id = "AllowLambdaPushLog"
  statement {
    effect = "Allow"
    actions = [
      "logs:CreateLogStream",
      "logs:PutLogEvents",
    ]
    resources = ["arn:aws:logs:*:*:*"]
  }
}

data "aws_iam_policy_document" "allow_copy_rds_snapshot" {
  policy_id = "AllowLambdaCopyRdsSnapshot"
  statement {
    effect = "Allow"
    sid    = "AllowLambdaAccessToRdsSnapshot"
    actions = [
      "rds:CopyDBSnapshot",
      "rds:ModifyDBSnapshot",
      "rds:DescribeDBSnapshots",
      "rds:ModifyDBSnapshotAttribute"
    ]
    resources = ["*"]
  }
  statement {
    sid    = "AllowLambdaAccessToKMSKey"
    effect = "Allow"
    actions = [
      "kms:Encrypt",
      "kms:Decrypt",
      "kms:ReEncrypt*",
      "kms:GenerateDataKey*"
    ]
    resources = [aws_kms_key.rds_snap.arn]
  }
}

resource "aws_iam_policy" "lambda_logging" {
  name   = "lambda-logging-cloudwatch"
  policy = data.aws_iam_policy_document.allow_logging.json
}

resource "aws_iam_policy" "lambda_rds" {
  name   = "lambda-access-rds"
  policy = data.aws_iam_policy_document.allow_copy_rds_snapshot.json
}

resource "aws_iam_role_policy_attachment" "func_log_policy" {
  role       = aws_iam_role.lambda_iam_role.id
  policy_arn = aws_iam_policy.lambda_logging.arn
}

resource "aws_iam_role_policy_attachment" "func_rds" {
  role       = aws_iam_role.lambda_iam_role.id
  policy_arn = aws_iam_policy.lambda_rds.arn
}
Next, I define the Lambda function itself. In this resource, I configure the source file from S3, the memory size, and the environment variables.
# lambda.tf
resource "aws_lambda_function" "rds_snap" {
  function_name = local.lambda_func_name
  description   = "Func to handle event from EventBridge to request RDS copy snapshot."
  role          = aws_iam_role.lambda_iam_role.arn
  handler       = local.handler_bin_name
  runtime       = "go1.x"

  s3_bucket    = module.s3_lambda.s3_bucket_id
  s3_key       = local.lambda_s3_key
  package_type = "Zip"
  memory_size  = 128

  environment {
    variables = local.func_env_var
  }

  tags = local.tags
}
EventBridge still cannot invoke this function, because we have not yet added a Lambda permission that allows it. The Terraform code below grants that permission.
# lambda.tf
resource "aws_lambda_permission" "allow_eventbridge" {
  statement_id  = "AllowExecutionFromEventBridge"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.rds_snap.function_name
  principal     = "events.amazonaws.com"
  source_arn    = module.eventbridge.eventbridge_rule_arns.orders
}
I created a CloudWatch log group to store the Lambda function's activity logs. To reduce costs for this lab, I set the log retention period to one day. (Note: you can adjust this setting as necessary.)
# cloudwatch-log-group.tf
resource "aws_cloudwatch_log_group" "lambda" {
  name              = "/aws/lambda/${local.lambda_func_name}"
  retention_in_days = 1

  lifecycle {
    prevent_destroy = false
  }
}
Create a KMS key that the Lambda function will use to encrypt the snapshot copied from the source region and transferred to the target region.
data "aws_iam_policy_document" "kms" {
  # Allow the root user to fully manage this key
  statement {
    effect = "Allow"
    actions = [
      "kms:*"
    ]
    resources = ["*"]
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"]
    }
  }

  # Allow the current user limited use of this key
  statement {
    effect = "Allow"
    actions = [
      "kms:CreateGrant",
      "kms:Encrypt",
      "kms:Decrypt",
      "kms:ReEncrypt*",
      "kms:GenerateDataKey*",
      "kms:DescribeKey",
    ]
    resources = ["*"]
    principals {
      type        = "AWS"
      identifiers = [data.aws_caller_identity.current.arn]
    }
  }
}

resource "aws_kms_key" "rds_snap" {
  description         = "CMK for AWS RDS backup"
  enable_key_rotation = true
  policy              = data.aws_iam_policy_document.kms.json
  multi_region        = true
}

resource "aws_kms_alias" "rds_snap" {
  target_key_id = aws_kms_key.rds_snap.id
  name          = format("alias/%s", lower("START_RDS_SNAP"))
}
Finally, I provide EventBridge rules to filter the events of interest and send them to the Lambda function.
Note: if you need to track events from AWS services themselves, set the event bus name to default.
# eventbridge.tf
module "eventbridge" {
  source  = "terraform-aws-modules/eventbridge/aws"
  version = "2.3.0"

  bus_name = local.bus_name

  rules = {
    orders = local.rds_events
  }

  targets = {
    orders = [
      {
        name = "event-to-lambda"
        arn  = aws_lambda_function.rds_snap.arn
      }
    ]
  }

  tags = local.tags

  depends_on = [aws_lambda_function.rds_snap]
}
Once you finish developing the Terraform code, you can deploy it to your environment and wait for Terraform to create the necessary resources.
terraform init
terraform apply
How to prove it
You can simulate an event that you are interested in.
- Go to the AWS web console.
- Go to EventBridge >> Event buses.
- Click Send events and define the event details like below. For the event source, use demo.event, which I defined in the Terraform event pattern.
{
  "EventCategories": ["creation"],
  "SourceType": "SNAPSHOT",
  "SourceArn": "arn:aws:rds:us-east-1:123456789012:snapshot:rds:snapshot-replica-2018-10-06-12-24",
  "Date": "2018-10-06T12:26:13.882Z",
  "SourceIdentifier": "rds:snapshot-replica-2018-10-06-12-24",
  "Message": "Automated snapshot created",
  "EventID": "RDS-EVENT-0091"
}
After clicking Send, wait a minute and check the function's monitoring page in Lambda (metrics and logs).
Conclusion
We now have the ability to efficiently manage and provision the Lambda function with Terraform while preparing its related resources at the same time. In this article, I presented an approach to designing a serverless architecture in which supporting services act as co-workers that collaborate with Lambda, enabling it to achieve what it could not do on its own.
My hope is that this article will serve as an inspiration for you to leverage serverless services in your own scenarios. Embrace the power of serverless and discover how it can revolutionize your projects!