Creating Thumbnail Image using Terraform, Lambda, IAM, CloudWatch and S3 Bucket

Alok Kumar for AWS Community Builders

In this blog, we are going to use Terraform as the IaC tool for resource provisioning.

Through Terraform, we are going to create the resources below and establish the connectivity between them:

  1. Lambda
  2. IAM
  3. CloudWatch
  4. S3 buckets (original and thumbnail)

We are going to see how to use Terraform to create an AWS Lambda function and configure an S3 trigger.

The use case is to generate a thumbnail image whenever an image is uploaded to the original S3 bucket.

We will first write a Lambda function which opens the original image from the source bucket, creates a thumbnail, and stores it in the thumbnail bucket.

Then we will configure an S3 trigger to invoke this Lambda function whenever an image is uploaded to the bucket.

The prerequisites for this project are Terraform and the AWS CLI, both installed and configured.

I have already configured the CLI on my system using the “aws configure” command.

I will create a new directory for the project called “thumbnail_generation_lambda-main” and open it in Visual Studio Code.

We will create the Lambda function in Python and put it in a directory called src.

We will start writing the code. First we need to define a Lambda handler, the entry point for the Lambda function, which takes two parameters: event and context.

I will import the logging library and configure the logger. At the start of the handler, we will log the event and the context.

Next, we need to extract the bucket name and the object key from the event. Whenever this Lambda function is invoked by an S3 trigger, the trigger passes an event object containing the bucket and the key (the name of the file being uploaded). Both sit under the first item of the event’s Records list.

Then we will define the destination bucket name and key.

We will call the destination bucket “alok-thumbnail-image-bucket-0007”. For the destination file name, we will simply append “_thumbnail” to the original file name. For that I will import the os library and use os.path.splitext on the key, which returns the name and the extension separately; from those I build the thumbnail key as the name, “_thumbnail”, and the extension. This will be the destination key.

Next, we need access to S3, for which I will import the boto3 library.

We open the file in S3 by passing the bucket and key, extract the body of the response, and read it as a byte string.

We will load this image into memory with BytesIO (from io import BytesIO) and then open it as a Pillow Image object by passing the BytesIO to Image.open. For information purposes, we will log the size of the image before we actually compress it.

To generate the thumbnail, we call the img.thumbnail method with the preferred size; here I will use (500, 500).

With the Image.ANTIALIAS resampling option passed, the image is now compressed, and we log the size again to see the final result after compression. Next we need to write this image to the destination bucket; for that we save the image into a buffer and then pass the buffer directly to the destination bucket.

We create a new BytesIO object as the buffer and save the image to it in the preferred format.

Now I will upload it to the destination bucket, using the response data to validate whether the upload was successful or not.

If the status code is not 200 (success), we raise an exception; otherwise we finish the Lambda function by returning successfully.

Here I am just returning the original event object. With this we have finished writing the Lambda function; below is the final code of the lambda.py file:

import logging
import boto3
from io import BytesIO
from PIL import Image
import os

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info(f"event: {event}")
    logger.info(f"context: {context}")

    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = event["Records"][0]["s3"]["object"]["key"]

    thumbnail_bucket = "alok-thumbnail-image-bucket-0007"
    thumbnail_name, thumbnail_ext = os.path.splitext(key)
    thumbnail_key = f"{thumbnail_name}_thumbnail{thumbnail_ext}"

    logger.info(f"Bucket name: {bucket}, file name: {key}, Thumbnail Bucket name: {thumbnail_bucket}, file name: {thumbnail_key}")

    s3_client = boto3.client('s3')

    # Load and open the original image from S3
    file_byte_string = s3_client.get_object(Bucket=bucket, Key=key)['Body'].read()
    img = Image.open(BytesIO(file_byte_string))
    logger.info(f"Size before compression: {img.size}")

    # Generate the thumbnail (img.thumbnail resizes in place, preserving aspect ratio)
    # Note: Image.ANTIALIAS was renamed to Image.LANCZOS and removed in Pillow 10+;
    # the pinned Pillow layer in the Terraform below ships an older version where it still works.
    img.thumbnail((500,500), Image.ANTIALIAS)
    logger.info(f"Size after compression: {img.size}")

    # Save the image into an in-memory buffer and upload it to S3 (always as JPEG)
    buffer = BytesIO()
    img.save(buffer, "JPEG")
    buffer.seek(0)

    sent_data = s3_client.put_object(Bucket=thumbnail_bucket, Key=thumbnail_key, Body=buffer)

    if sent_data['ResponseMetadata']['HTTPStatusCode'] != 200:
        raise Exception('Failed to upload image {} to bucket {}'.format(key, bucket))

    return event

We will then move on to the Terraform configuration part (main.tf).

We are going to describe all the resources to be provisioned in Terraform:

1. AWS Provider

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.36.1"
    }
    archive = {
      source  = "hashicorp/archive"
      version = "~> 2.2.0"
    }
  }
  required_version = "~> 1.0"
}

provider "aws" {
  region = var.aws_region
}

2. Archive provider, used for bundling the source into a zip file that is uploaded to Lambda.
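
The Lambda resource in step 7 references data.archive_file.thumbnail_lambda_source_archive, which is not shown in the article itself; here is a minimal sketch of what that data source likely looks like, assuming the Python source sits in the src directory:

data "archive_file" "thumbnail_lambda_source_archive" {
  type        = "zip"
  source_dir  = "${path.module}/src"   # assumption: lambda.py lives in src/
  output_path = "${path.module}/my-lambda-code.zip"
}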

3. The original and thumbnail S3 buckets. S3 bucket names are globally unique, so replace these with your own (the thumbnail bucket name must match the one hardcoded in lambda.py).

resource "aws_s3_bucket" "thumbnail_original_image_bucket" {
  bucket = "alok-original-image-bucket-0007"
}

resource "aws_s3_bucket" "thumbnail_image_bucket" {
  bucket = "alok-thumbnail-image-bucket-0007"
}

4. An IAM policy allowing GetObject on the original bucket and PutObject on the thumbnail bucket.

resource "aws_iam_policy" "thumbnail_s3_policy" {
  name = "thumbnail_s3_policy"
  policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [{
      "Effect" : "Allow",
      "Action" : "s3:GetObject",
      "Resource" : "arn:aws:s3:::alok-original-image-bucket-0007/*"
      }, {
      "Effect" : "Allow",
      "Action" : "s3:PutObject",
      "Resource" : "arn:aws:s3:::alok-thumbnail-image-bucket-0007/*"
    }]
  })
}

5. A Lambda IAM role, with a trust policy that lets the Lambda service assume the role.

resource "aws_iam_role" "thumbnail_lambda_role" {
  name = "thumbnail_lambda_role"
  assume_role_policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [{
      "Effect" : "Allow",
      "Principal" : {
        "Service" : "lambda.amazonaws.com"
      },
      "Action" : "sts:AssumeRole"
    }]
  })
}

6. IAM policy attachments: the S3 policy above, plus the AWS-managed AWSLambdaBasicExecutionRole, which grants the CloudWatch Logs permissions.

resource "aws_iam_policy_attachment" "thumbnail_role_s3_policy_attachment" {
  name       = "thumbnail_role_s3_policy_attachment"
  roles      = [aws_iam_role.thumbnail_lambda_role.name]
  policy_arn = aws_iam_policy.thumbnail_s3_policy.arn
}
resource "aws_iam_policy_attachment" "thumbnail_role_lambda_policy_attachment" {
  name       = "thumbnail_role_lambda_policy_attachment"
  roles      = [aws_iam_role.thumbnail_lambda_role.name]
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
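
Note that aws_iam_policy_attachment manages a policy’s attachment exclusively across all IAM users, roles, and groups; if anything else shares these policies, the per-role aws_iam_role_policy_attachment resource is usually the safer choice. A sketch of the equivalent:

resource "aws_iam_role_policy_attachment" "thumbnail_role_s3_policy_attachment" {
  role       = aws_iam_role.thumbnail_lambda_role.name
  policy_arn = aws_iam_policy.thumbnail_s3_policy.arn
}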

7. The Lambda function itself, pointing at the “my-lambda-code.zip” archive produced by the archive_file data source and attaching a public Pillow layer (Klayers) for image processing. Note that the layer ARN is region-specific (ap-south-1 here).

resource "aws_lambda_function" "thumbnail_lambda" {
  function_name = "thumbnail_generation_lambda"
  filename      = "${path.module}/my-lambda-code.zip"

  runtime     = "python3.9"
  handler     = "lambda.lambda_handler"
  memory_size = 256

  source_code_hash = data.archive_file.thumbnail_lambda_source_archive.output_base64sha256

  role = aws_iam_role.thumbnail_lambda_role.arn

  layers = [
    "arn:aws:lambda:ap-south-1:770693421928:layer:Klayers-p39-pillow:1"
  ]
}

8. A Lambda permission that allows the original bucket to invoke the function, plus an S3 bucket notification that triggers the Lambda whenever an object is created.

resource "aws_lambda_permission" "thumbnail_allow_bucket" {
  statement_id  = "AllowExecutionFromS3Bucket"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.thumbnail_lambda.arn
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.thumbnail_original_image_bucket.arn
}

resource "aws_s3_bucket_notification" "thumbnail_notification" {
  bucket = aws_s3_bucket.thumbnail_original_image_bucket.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.thumbnail_lambda.arn
    events              = ["s3:ObjectCreated:*"]
  }

  depends_on = [
    aws_lambda_permission.thumbnail_allow_bucket
  ]
}

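As written, the notification fires for every created object. If you want to restrict the trigger to particular image types, the lambda_function block accepts optional filters; a sketch of the same notification limited to .jpg uploads (the suffix is an assumption, adjust to your needs):

resource "aws_s3_bucket_notification" "thumbnail_notification" {
  bucket = aws_s3_bucket.thumbnail_original_image_bucket.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.thumbnail_lambda.arn
    events              = ["s3:ObjectCreated:*"]
    filter_suffix       = ".jpg"   # only objects ending in .jpg trigger the function
  }

  depends_on = [
    aws_lambda_permission.thumbnail_allow_bucket
  ]
}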

9. A CloudWatch log group for logging and monitoring. Lambda writes its logs to a log group named /aws/lambda/<function name>, so creating it ourselves lets us control the retention period (30 days here).

resource "aws_cloudwatch_log_group" "thumbnail_cloudwatch" {
  name = "/aws/lambda/${aws_lambda_function.thumbnail_lambda.function_name}"

  retention_in_days = 30
}

Alongside main.tf, we create two more files:

1. variables.tf

variable "aws_region" {
  description = "AWS region for all resources."

  type    = string
  default = "ap-south-1"
}

2. output.tf

output "iam_arn" {
  description = "IAM Policy ARN"
  value       = aws_iam_policy.thumbnail_s3_policy.arn
}

output "function_name" {
  description = "Lambda function name"
  value       = aws_lambda_function.thumbnail_lambda.function_name
}

output "cloud_watch_arn" {
  description = "Cloudwatch ARN"
  value       = aws_cloudwatch_log_group.thumbnail_cloudwatch.arn
}

Now it’s time to do the deployment by executing the commands below:

1. terraform init => Initializes the working directory and downloads the providers declared in your code.

2. terraform fmt => Formats the written Terraform code.

3. terraform validate => Validates the written Terraform code.

4. terraform plan => Shows the changes Terraform will make so you can review them before applying.

5. terraform apply => Applies the reviewed changes against real infrastructure.

After the apply, we verify in the AWS console that the S3 buckets, the Lambda function, and the CloudWatch log group were all created.

Now we upload an image “sample.jpg” of size 1.1 MB into the original S3 bucket “alok-original-image-bucket-0007”.

Verifying both S3 buckets: the original image appears in the original bucket, and the generated thumbnail “sample_thumbnail.jpg” appears in the thumbnail bucket.

See how the image size decreased from 1.1 MB to 20.7 KB.

You can destroy the created resources using the “terraform destroy” command, but before that you need to empty both buckets; otherwise Terraform will raise an error.
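
Alternatively, if you are comfortable with Terraform deleting objects for you, the aws_s3_bucket resource supports force_destroy, which empties the bucket automatically on destroy; a sketch (use with care, as this deletes all objects without confirmation):

resource "aws_s3_bucket" "thumbnail_image_bucket" {
  bucket        = "alok-thumbnail-image-bucket-0007"
  force_destroy = true   # allow terraform destroy to empty the bucket first
}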

One of the best extensions of this pattern is shortlisting candidate resumes: a candidate uploads a resume into the original S3 bucket, the upload triggers the Lambda, and the function scans the resume for selected keywords and moves the matching ones to a shortlisted S3 bucket 😊.
