Fabian Quijosaca

Uploading Images to AWS S3 with Serverless

AWS S3 is one of the many services provided by Amazon Web Services (AWS); as most of you probably already know, it allows you to store files. AWS Lambda, on the other hand, is one of the most revolutionary services of our day. Although the name may sound intimidating, AWS Lambda is a computing platform that autonomously manages the computing resources required by your code and can run code for virtually any type of application or back-end service. Its purpose is to simplify building applications: there are no servers to provision or manage, AWS Lambda takes care of everything needed to run and scale your code with high availability, and you pay only on demand, that is, for the processing time consumed while your code runs.

The purpose of this post is to explain how to develop a serverless back-end service to upload images (original and thumbnail) to AWS S3, using the Serverless Framework, a tool built to make creating serverless applications even faster and used in production by companies such as Coca-Cola. According to Wikipedia:

Serverless Framework is a free, open-source web framework written in Node.js. Serverless is the first framework developed for building applications on AWS Lambda, a serverless computing platform provided by Amazon as part of Amazon Web Services.

In the next few steps, I'll walk you through building a serverless application for processing and uploading images to AWS S3. If you'd rather go straight to the code, here it is.

Note (updated): when using AWS Lambda to upload large files to S3, there are certain limitations of API Gateway and Lambda to consider:

  1. Memory: Lambda functions are limited by the amount of memory they are allocated, which can constrain the size of the files that can be uploaded.
  2. Timeout: Lambda functions have a maximum execution time of 15 minutes (you may need to increase the timeout limit to upload large files).
  3. Payload size: Lambda functions have a payload size limit of 6 MB for synchronous invocations and 256 KB for asynchronous invocations (for large files you may need an S3 multipart upload, or a presigned URL so the client uploads directly to S3; see the sketch after this list).
  4. Network bandwidth: if the Lambda function is located in a region that is far from the S3 bucket, the upload speed may be slower.
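
One common workaround for the payload limit, sketched below only as an illustration (it is not part of the service built in this post, and the key scheme and expiry are assumptions), is a small Lambda that returns a presigned PUT URL, so the client uploads directly to S3 and the file never passes through API Gateway:

//getUploadUrl.js (hypothetical helper, not part of this post's service)
const AWS = require("aws-sdk")
const s3 = new AWS.S3()

module.exports.handler = async event => {
    const key = `uploads/${Date.now()}.png` // placeholder key scheme
    const url = s3.getSignedUrl("putObject", {
        Bucket: process.env.Bucket, // same environment variable used later in this post
        Key: key,
        Expires: 300, // URL valid for 5 minutes
        ContentType: "image/png"
    })
    return { statusCode: 200, body: JSON.stringify({ key, url }) }
}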

Required Tools

  • Node.js 12
  • Serverless
  • AWS CLI

1. Install AWS CLI (Command Line Interface)

AWS CLI is a unified tool for managing AWS services; it allows you to control multiple AWS services from the command line. Once it is installed, configure a profile with your AWS account credentials.
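
For example (the profile name is a placeholder):

aws configure --profile serverless-admin

The command prompts for the access key ID, secret access key, default region and output format of the account you will deploy with.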

2. Install the serverless framework

The official guide explains this process in detail: https://serverless.com/framework/docs/getting-started/.
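
With Node.js already installed, it usually comes down to:

npm install -g serverless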

3. Run the following command to generate sample code with serverless.

First create a folder for the project, for example serverless-upload-image, and run the following inside it.

sls create --template hello-world

The above command will create the following files:

  • serverless.yml
  • handler.js

The serverless.yml file holds all the configuration for the service: the infrastructure provider to be used (AWS, Google Cloud or Azure), the database, the functions to be deployed, the events to listen for, the permissions to access each of the resources, among other things.

The handler.js file contains the generated hello-world code, which is a simple function that returns a JSON document with status 200 and a message. We will rename this file to fileUploaderHome.js.

4. Install dependencies

npm init -y
npm install busboy uuid jimp aws-sdk

Since the service must handle files, the client will send a POST request with the body encoded as multipart/form-data. To decode that format we will use the busboy library (note that this post uses the busboy 0.x API; busboy 1.x changed both the constructor and the 'file' event signature). Jimp will generate the image thumbnails, uuid will generate a unique identifier for each image, and finally the AWS SDK provides the JavaScript objects to manage AWS services such as Amazon S3, Amazon EC2 and DynamoDB.

5. Create the function to decode the multipart/form-data

//formParser.js
const Busboy = require('busboy');

// Parses an API Gateway event whose body is multipart/form-data.
// Resolves to { files: [...], <fieldname>: <value>, ... }
module.exports.parser = (event, fileSize) =>
    new Promise((resolve, reject) => {
        const busboy = new Busboy({
            headers: {
                'content-type':
                    event.headers['content-type'] || event.headers['Content-Type']
            },
            limits: {
                fileSize
            }
        });

        const result = {
            files: []
        };

        busboy.on('file', (fieldname, file, filename, encoding, mimetype) => {
            const uploadFile = {};
            file.on('data', data => {
                // Accumulate the chunks; larger files arrive in several 'data' events
                uploadFile.content = uploadFile.content
                    ? Buffer.concat([uploadFile.content, data])
                    : data;
            });
            file.on('end', () => {
                if (uploadFile.content) {
                    uploadFile.filename = filename;
                    uploadFile.contentType = mimetype;
                    uploadFile.encoding = encoding;
                    uploadFile.fieldname = fieldname;
                    result.files.push(uploadFile);
                }
            });
        });

        // Plain (non-file) form fields are copied onto the result object
        busboy.on('field', (fieldname, value) => {
            result[fieldname] = value;
        });

        busboy.on('error', error => reject(error));
        busboy.on('finish', () => resolve(result));

        // API Gateway delivers binary bodies base64-encoded
        busboy.write(event.body, event.isBase64Encoded ? 'base64' : 'binary');
        busboy.end();
    });

6. Function that will process and upload the images to S3

Below is the code, explained step by step, that processes the original image, generates the thumbnail and uploads both to S3.

//fileUploaderHome.js
"use strict";
const AWS = require("aws-sdk");
const { v4: uuid } = require("uuid");
const Jimp = require("jimp");
const s3 = new AWS.S3();
const formParser = require("./formParser");

const bucket = process.env.Bucket;
const MAX_SIZE = 4000000; // 4MB
const PNG_MIME_TYPE = "image/png";
const JPEG_MIME_TYPE = "image/jpeg";
const JPG_MIME_TYPE = "image/jpg";
const MIME_TYPES = [PNG_MIME_TYPE, JPEG_MIME_TYPE, JPG_MIME_TYPE];

module.exports.handler = async event => {
    try {
        const formData = await formParser.parser(event, MAX_SIZE);
        const file = formData.files[0];

        if (!isAllowedFile(file.content.byteLength, file.contentType))
            return getErrorMessage("File size or type not allowed");

        const uid = uuid();
        const originalKey = `${uid}_original_${file.filename}`;
        const thumbnailKey = `${uid}_thumbnail_${file.filename}`;

        // Generate a 460px-wide thumbnail, then upload both files in parallel
        const fileResizedBuffer = await resize(file.content, file.contentType, 460);
        const [originalFile, thumbnailFile] = await Promise.all([
            uploadToS3(bucket, originalKey, file.content, file.contentType),
            uploadToS3(bucket, thumbnailKey, fileResizedBuffer, file.contentType)
        ]);

        // Expires is expressed in seconds
        const signedOriginalUrl = s3.getSignedUrl("getObject", { Bucket: originalFile.Bucket, Key: originalKey, Expires: 60000 });
        const signedThumbnailUrl = s3.getSignedUrl("getObject", { Bucket: thumbnailFile.Bucket, Key: thumbnailKey, Expires: 60000 });

        return {
            statusCode: 200,
            body: JSON.stringify({
                id: uid,
                mimeType: file.contentType,
                originalKey,
                thumbnailKey,
                bucket: originalFile.Bucket,
                fileName: file.filename,
                originalUrl: signedOriginalUrl,
                thumbnailUrl: signedThumbnailUrl,
                originalSize: file.content.byteLength
            })
        };
    } catch (e) {
        return getErrorMessage(e.message);
    }
};
  • The resize function (file.content, file.contentType, 460), explained in detail later, generates a thumbnail from the original image with a width of 460 px and a height determined automatically. It receives the binary content of the original file, the file's MIME type and the width at which the thumbnail will be generated. The await keyword waits for the resizing to finish before continuing to the next line.

  • The uploadToS3 function receives 3 parameters: the bucket to upload to, the key of the file, the binary content and the MIME type; it returns a promise. What this function does is explained in detail later.

  • Once we have the original and the thumbnail buffer, both are uploaded to S3 in parallel with Promise.all(...); when all uploads finish, it returns an array with the information of each uploaded file. Then a signed URL (getSignedUrl), with a specified expiration time, is obtained for each object using the AWS S3 client.
    Finally, if everything executes successfully, the function returns a JSON document with the information of the processed images.

In the following block, each of the utility functions used in the previous code block is detailed.

// Builds an error response for the client
const getErrorMessage = message => ({ statusCode: 500, body: JSON.stringify({ message }) })

// Rejects files that are too large or not an allowed image type
const isAllowedFile = (size, mimeType) =>
    size <= MAX_SIZE && MIME_TYPES.includes(mimeType)

const uploadToS3 = (bucket, key, buffer, mimeType) =>
    new Promise((resolve, reject) => {
        s3.upload(
            { Bucket: bucket, Key: key, Body: buffer, ContentType: mimeType },
            (err, data) => (err ? reject(err) : resolve(data))
        )
    })

// Jimp.read already returns a promise, so the chain can be returned directly
const resize = (buffer, mimeType, width) =>
    Jimp.read(buffer)
        .then(image => image.resize(width, Jimp.AUTO).quality(70).getBufferAsync(mimeType))

Well, so far we have reviewed the code that validates, processes and uploads the images to S3. What remains is serverless.yml, the control file of the Serverless Framework, where we describe the resources, service definitions, roles, settings, permissions and more for our service.

# serverless.yml
service: file-UploaderService-foqc-home
custom:
    bucket: lambda-test-foqc-file-home
provider:
    name: aws
    runtime: nodejs12.x
    region: us-east-1
    stackName: fileUploaderHome
    apiGateway:
        binaryMediaTypes:
            - '*/*'
    iamRoleStatements:
        - Effect: "Allow"
          Action:
              - "s3:PutObject"
              - "s3:GetObject"
          Resource:
              - "arn:aws:s3:::${self:custom.bucket}/*"
functions:
    UploadFileHome:
        handler: fileUploaderHome.handler
        events:
            - http:
                path: upload
                method: post
                cors: true
        environment:
            Bucket: ${self:custom.bucket}
resources:
    Resources:
        StorageBucket:
            Type: "AWS::S3::Bucket"
            Properties:
                BucketName: ${self:custom.bucket}
  1. service, refers to a project; it is the name under which the service will be deployed.

  2. custom, this section allows defining variables that can be used at various points in the document, centralizing values for development or deployment; therefore we add the bucket variable, with the value lambda-test-foqc-file-home. This value will be used to define the bucket in which the files will be stored.

  3. provider, in this section the provider, the runtime and the respective resource permissions are defined. As mentioned at the beginning of this post, the provider is Amazon Web Services (aws), the runtime is Node.js 12, the deployment region is the eastern United States, and the CloudFormation stack name is fileUploaderHome (this last one is optional).
    The next part is important for allowing our API Gateway to support binary files: the apiGateway section must declare '*/*' among its binaryMediaTypes, a wildcard meaning that any binary format, such as multipart/form-data, will be accepted. Then the permissions (iamRoleStatements) are defined, allowing access to the S3 bucket declared in the custom section, ${self:custom.bucket}.

  4. functions, this section defines each function-as-a-service (FaaS) implementation. A function is the minimum unit of deployment; a service can be composed of several functions, and each of them should fulfill a single task, although that is just a recommendation. Each function can have a specific configuration, otherwise it inherits the defaults.
    Our function, UploadFileHome, is invoked by an HTTP POST event on the upload path, allows CORS, and is handled by the handler function already implemented in the fileUploaderHome.js file.

  5. resources, finally this section declares the resources to be used by the functions defined above. The storage bucket (StorageBucket) is declared with its type (Type: 'AWS::S3::Bucket') and, in its properties, the name of the bucket (BucketName).

Finally! We have finished building our service, which uploads an image and its thumbnail to S3, so it is time to deploy it with the following command.

sls deploy --stage=test

At the end of the deployment, the URL of our service will be displayed; you can test it using Postman.

If the upload succeeds, the service returns a JSON document with the information of the processed image: the keys, the file name, and the signed URLs of the original file and the thumbnail.
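
If you prefer the terminal to Postman, the same multipart request can be sent with curl. The URL below is a placeholder for the endpoint printed by the deploy, and image.png is any local image; note that the part's MIME type must be set explicitly so it passes the isAllowedFile check:

curl -X POST -F "file=@image.png;type=image/png" https://<api-id>.execute-api.us-east-1.amazonaws.com/test/upload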

To conclude, in case you need to remove the service, run the following command.

sls remove --stage=test

Conclusions

This service can be consumed on demand by any external application or service, since it is not coupled to any business logic. The code can also be refactored to upload files in general, not only images, and it could receive, as part of the HTTP POST event, the directory (path) inside the bucket where the file should be stored, avoiding a fixed directory. In any case, as a learning exercise it serves as a basis for building a more robust and configurable service.
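
As a sketch of that last idea: since the formParser above already copies plain form fields onto its result, a path field sent by the client could be used to prefix the S3 keys. The field name and the sanitizing regex are assumptions, not part of the service built above:

// inside the handler, after parsing the form (hypothetical refactor)
const formData = await formParser.parser(event, MAX_SIZE)
// strip leading/trailing slashes from the client-supplied path, if any
const prefix = formData.path ? `${formData.path.replace(/^\/+|\/+$/g, "")}/` : ""
const originalKey = `${prefix}${uid}_original_${file.filename}`
const thumbnailKey = `${prefix}${uid}_thumbnail_${file.filename}`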

It has taken me several days to document and write this post; I am satisfied with it, and I hope this information has been useful for you.

Thank you!

Top comments (5)

WeaponizedLego

You make a note that it's not advised to use Lambdas for this purpose due to limitations on AWS API Gateway and Lambda:

Note: It is not recommended to use Lambdas for file uploads due to certain limitations of Api Gateway and Lambdas, if despite this you still want it, this blog is for you.

However, it would be nice to see you outline those specific limitations you are thinking of, as I don't see any limits that can't be increased to alleviate any concerns there might be, and I'm afraid I'm missing something that you have seen.

Fabian Quijosaca

I have updated the note. What I meant to say is that there are certain limitations to be aware of, mainly when dealing with large files.

WeaponizedLego

Thank you for the very quick update! 🚀

Chetan

Great Work

Fabian Quijosaca

Thank you!