There are many ways to write server applications in many languages, and many frameworks to help us build them, like Go with mux, Java with Spring Boot, NodeJS with Express, etc. When it comes to hosting the server application and scaling the system, however, it requires a lot of effort and planning. Serverless is a cloud-native development model that allows developers to build and run applications without having to manage servers. With a serverless architecture, we don't need to worry about provisioning, managing, and scaling servers, applying security patches, someone hacking into our server, and much more. All of this is taken care of by the cloud provider. So we should try to use a serverless architecture for APIs wherever possible.
In this post, we will be talking about running JavaScript functions as an API application.
AWS Lambda
AWS Lambda is a serverless compute service which we will be using to deploy our JS functions. We just upload our code as a ZIP file, and Lambda automatically allocates compute power and runs our code based on the incoming request or event, at any scale of traffic.
There are many frameworks available to write NodeJS serverless applications, like Architect, Up, Middy, and many more. We will be using the Serverless Framework to write our backend API application because it supports multiple programming languages, and it has out-of-the-box support for the other AWS services we will be using, like S3 and API Gateway.
Serverless Framework
The Serverless Framework helps you develop and deploy your AWS Lambda functions, along with the AWS infrastructure resources they require. It's a CLI that offers structure, automation and best practices out-of-the-box, allowing you to focus on building sophisticated, event-driven, serverless architectures, comprised of Functions and Events.
Setup
Prerequisites
- NodeJS: >= 12
- Serverless CLI: Install command
npm install -g serverless
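You can confirm the installation by checking the CLI version:
$ serverless --version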
We will write a simple NodeJS application with the file/folder structure below, or you can run the serverless command and set up a new project from a template as well. You can clone the demo code from https://github.com/thakkaryash94/aws-serverless-ci-cd-demo.
.
├── handler.js
├── package-lock.json
├── package.json
└── serverless.yml
handler.js
'use strict';

// Lambda entry point. API Gateway invokes this function and expects
// an object with a `statusCode` and a string `body` in return.
module.exports.main = async (event, context) => {
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'Hello from new service A!',
        input: event, // echo the incoming event for demonstration
      },
      null,
      2
    ),
  };
};
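In a real service, the handler would usually read request data from the incoming event. As a minimal sketch (the name query parameter below is just an illustration, not part of the demo code), it could look like this:
module.exports.main = async (event) => {
  // With the API Gateway proxy integration, query string values arrive on
  // event.queryStringParameters (null when no query string is sent).
  const name = (event.queryStringParameters || {}).name || 'world';
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};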
We will update our serverless.yml file as below. We will be using AWS S3 to store the function zip and the AWS API Gateway service to access our function through an API URL.
serverless.yml
service: servicea
frameworkVersion: "2"
provider:
  name: aws
  runtime: nodejs14.x
  lambdaHashingVersion: 20201221
  region: ${env:AWS_REGION}
  apiGateway:
    restApiId: ${env:AWS_REST_API_ID}
    restApiRootResourceId: ${env:AWS_REST_API_ROOT_ID}
  # delete below section if you don't want to keep the Lambda function zip in a bucket
  deploymentBucket:
    blockPublicAccess: true # Prevents public access via ACLs or bucket policies. Default is false
    skipPolicySetup: false # Prevents creation of default bucket policy when framework creates the deployment bucket. Default is false
    name: ${env:AWS_BUCKET_NAME} # Deployment bucket name. Default is generated by the framework
    maxPreviousDeploymentArtifacts: 10 # On every deployment the framework prunes the bucket to remove artifacts older than this limit. The default is 5
functions:
  main:
    handler: handler.main # Function name
    memorySize: 128
    events:
      - http:
          path: servicea # URL path to access the function
          method: get # Method name for API gateway
Now, you can run the command below to execute the function locally. It will give you warnings about missing environment variables like AWS_REGION, AWS_BUCKET_NAME, etc., but we can ignore them, or you can comment them out for local development purposes.
$ serverless invoke local -f main
It will return the response as below.
{
  "statusCode": 200,
  "body": "{\n \"message\": \"Hello from new service A!\",\n \"input\": \"\"\n}"
}
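You can also pass a custom event payload to the local invocation with the CLI's --data flag; the payload below is just an example, and it will show up in the echoed input field:
$ serverless invoke local -f main --data '{"key": "value"}'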
So our serverless application is ready and working correctly locally. In a real project, we will need many of these applications, with each application serving an individual API request.
We keep our code on VCS providers like GitHub, GitLab, BitBucket, etc. The problem is that it will be very difficult to maintain many separate repositories, even for a single project. To fix that, we can store all these applications in one repository, which is called a monorepo. That's the difference between a monorepo and a multirepo.
Setup Monorepo
To convert our serverless project into a monorepo, we just need to create a folder, e.g. aws-serverless-ci-cd-demo, and move the service folders inside it. Now, we can have as many functions as we need for our project, and they will all be inside a single repository.
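Assuming the services already exist as sibling folders named servicea and serviceb, the conversion is just a couple of shell commands:
$ mkdir aws-serverless-ci-cd-demo
$ mv servicea serviceb aws-serverless-ci-cd-demo/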
The final overall structure will look like below.
aws-serverless-ci-cd-demo
├── README.md
├── servicea
│ ├── handler.js
│ ├── package-lock.json
│ ├── package.json
│ └── serverless.yml
└── serviceb
├── handler.js
├── package-lock.json
├── package.json
└── serverless.yml
We will be using AWS Lambda, AWS S3, and API Gateway to deploy and access our services. So let's discuss the use of AWS S3 and API Gateway.
AWS S3
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
We will store our Lambda function code as a zip in the bucket. The advantage is that we can keep track of our Lambda functions along with their code and timestamps. This is a totally optional step; you can skip it by commenting out or deleting the deploymentBucket section in the serverless.yml file.
AWS API Gateway
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services.
We will use API Gateway to expose our Lambda function at a URL so that we can access it. The Serverless Framework will take the path and method from the http section and set up a route based on them. In the demo, it will create a /servicea route with the GET method and map this route to our Lambda function.
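Once deployed, the endpoint follows API Gateway's standard REST URL format. With placeholder values for the API ID, region, and stage (the Serverless Framework deploys to the dev stage by default), a request would look something like this:
$ curl https://<rest-api-id>.execute-api.<aws-region>.amazonaws.com/<stage>/servicea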
CI/CD Deployment
The "CI" in CI/CD refers to continuous integration, which is an automation process to built, tested the code when developers push the code. In our application, creating a zip file of the functions.
The "CD" in CI/CD refers to continuous delivery and/or continuous deployment, which means deliver/deploy the code to the development, staging, uat, qa, production environment. In our application, it means deploying the zip to Lambda function and configuring API Gateway.
Fortunately, the Serverless Framework has built-in support for this. All we need to do is provide AWS credentials as environment variables like AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. We will need a few more variables because we are using other services like S3 and API Gateway.
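For a deploy from your own machine, these would simply be shell exports (the values below are placeholders):
$ export AWS_ACCESS_KEY_ID=<your-access-key-id>
$ export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
$ export AWS_REGION=us-east-1
$ export AWS_REST_API_ID=<your-rest-api-id>
$ export AWS_REST_API_ROOT_ID=<your-rest-api-root-resource-id>
$ export AWS_BUCKET_NAME=<your-deployment-bucket>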
We will be using GitHub Actions to build, package, and deploy our functions. So let's see how we can implement CI/CD using GitHub Actions.
GitHub Actions
GitHub Actions help you automate tasks within your software development life cycle. GitHub Actions are event-driven, meaning that you can run a series of commands after a specified event has occurred. For example, every time someone creates a pull request for a repository, you can automatically run a command that executes a software testing script.
We can configure GitHub Actions based on multiple triggers like push, pull_request, etc., and for different branches as well.
There are two ways to deploy the serverless functions.
- Auto Deployment
- Trigger setup
name: Auto Serverless Deployment
on: [push, pull_request]
- The first job is to detect the changes in the repo's files and folders and return the list. Here we will use the git diff between the current and last commit and apply a filter which returns only added or updated files. Then we will go through each file, collect only the unique folder names, and return them as output.
jobs:
  changes:
    name: Changes
    runs-on: ubuntu-latest
    outputs:
      folders: ${{ steps.filter.outputs.folders }}
    steps:
      - uses: actions/checkout@v2
      - name: Check changed files
        id: diff
        run: |
          if [ $GITHUB_BASE_REF ]; then
            # Pull Request
            git fetch origin $GITHUB_BASE_REF --depth=1
            export DIFF=$( git diff --name-only origin/$GITHUB_BASE_REF $GITHUB_SHA )
            echo "Diff between origin/$GITHUB_BASE_REF and $GITHUB_SHA"
          else
            # Push
            git fetch origin ${{ github.event.before }} --depth=1
            export DIFF=$( git diff --diff-filter=d --name-only ${{ github.event.before }} $GITHUB_SHA )
            echo "Diff between ${{ github.event.before }} and $GITHUB_SHA"
          fi
          echo "$DIFF"
          # Escape newlines (replace \n with %0A)
          echo "::set-output name=diff::$( echo "$DIFF" | sed ':a;N;$!ba;s/\n/%0A/g' )"
      - name: Set matrix for build
        id: filter
        run: |
          DIFF="${{ steps.diff.outputs.diff }}"
          if [ -z "$DIFF" ]; then
            echo "::set-output name=folders::[]"
          else
            JSON="["
            # Loop by lines
            while read path; do
              # Set $directory to substring before /
              directory="$( echo $path | cut -d'/' -f1 -s )"
              # ignore .github folder
              if [[ "$directory" != ".github" ]]; then
                # Add build to the matrix only if it is not already included
                JSONline="\"$directory\","
                if [[ "$JSON" != *"$JSONline"* ]]; then
                  JSON="$JSON$JSONline"
                fi
              fi
            done <<< "$DIFF"
            # Remove last "," and add closing brackets
            if [[ $JSON == *, ]]; then
              JSON="${JSON%?}"
            fi
            JSON="$JSON]"
            echo $JSON
            # Set output
            echo "::set-output name=folders::$( echo "$JSON" )"
          fi
- After adding a serverless service servicea, the build action will print something like below.
Check changed files
Diff between 3da227687e19da14062916c6f71cef0c7e3f9033 and 96a8e3a39ab79ccff3a294ea485c4c3854d496c6
servicea/.gitignore
servicea/handler.js
servicea/package-lock.json
servicea/package.json
servicea/serverless.yml
Set matrix for build
["servicea"]
- Next, we will create a job for every folder name using the matrix strategy, as below.
deploy:
  needs: changes
  name: Deploy
  if: ${{ needs.changes.outputs.folders != '[]' && needs.changes.outputs.folders != '' }}
  strategy:
    matrix:
      # Parse JSON array containing names of all filters matching any of changed files
      # e.g. ['servicea', 'serviceb'] if both package folders contain changes
      folder: ${{ fromJSON(needs.changes.outputs.folders) }}
- Now, it's time to build and deploy the function. Please define all the environment variables used in the serverless.yml file in the env section, as below. Here, we will go into each changed folder and run npx serverless deploy. This command will create a zip, upload it to S3, create/update the Lambda function, and finally configure it with API Gateway.
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
    - name: Configure AWS Credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: ${{ secrets.AWS_REGION }}
    - name: deploy
      run: npx serverless deploy
      working-directory: ${{ matrix.folder }}
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_REST_API_ROOT_ID: ${{ secrets.AWS_REST_API_ROOT_ID }}
        AWS_REST_API_ID: ${{ secrets.AWS_REST_API_ID }}
        AWS_BUCKET_NAME: ${{ secrets.AWS_BUCKET_NAME }}
Before we push the code and the action starts to build and deploy it, we need to add the secret environment variables below. You should never use the root account; always create a new user with restricted permissions based on the use case. In this process, our user will need Lambda access, S3 write access, and API Gateway access.
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_REGION
- AWS_REST_API_ROOT_ID
- AWS_REST_API_ID
- AWS_BUCKET_NAME: the bucket name where we want our zip files to be stored; if you removed `deploymentBucket` from the `serverless.yml` file, you can skip this variable as well.
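These can be added on the repository's Settings → Secrets page, or, as a sketch using the GitHub CLI (assuming gh is installed and authenticated; each command prompts for the value):
$ gh secret set AWS_ACCESS_KEY_ID
$ gh secret set AWS_SECRET_ACCESS_KEY
$ gh secret set AWS_REGION
$ gh secret set AWS_REST_API_ROOT_ID
$ gh secret set AWS_REST_API_ID
$ gh secret set AWS_BUCKET_NAME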
auto.yml
name: Auto Serverless Deployment
on: [push, pull_request]
jobs:
  changes:
    name: Changes
    runs-on: ubuntu-latest
    outputs:
      folders: ${{ steps.filter.outputs.folders }}
    steps:
      - uses: actions/checkout@v2
      - name: Check changed files
        id: diff
        run: |
          if [ $GITHUB_BASE_REF ]; then
            # Pull Request
            git fetch origin $GITHUB_BASE_REF --depth=1
            export DIFF=$( git diff --name-only origin/$GITHUB_BASE_REF $GITHUB_SHA )
            echo "Diff between origin/$GITHUB_BASE_REF and $GITHUB_SHA"
          else
            # Push
            git fetch origin ${{ github.event.before }} --depth=1
            export DIFF=$( git diff --diff-filter=d --name-only ${{ github.event.before }} $GITHUB_SHA )
            echo "Diff between ${{ github.event.before }} and $GITHUB_SHA"
          fi
          echo "$DIFF"
          # Escape newlines (replace \n with %0A)
          echo "::set-output name=diff::$( echo "$DIFF" | sed ':a;N;$!ba;s/\n/%0A/g' )"
      - name: Set matrix for build
        id: filter
        run: |
          DIFF="${{ steps.diff.outputs.diff }}"
          if [ -z "$DIFF" ]; then
            echo "::set-output name=folders::[]"
          else
            JSON="["
            # Loop by lines
            while read path; do
              # Set $directory to substring before /
              directory="$( echo $path | cut -d'/' -f1 -s )"
              # ignore .github folder
              if [[ "$directory" != ".github" ]]; then
                # Add build to the matrix only if it is not already included
                JSONline="\"$directory\","
                if [[ "$JSON" != *"$JSONline"* ]]; then
                  JSON="$JSON$JSONline"
                fi
              fi
            done <<< "$DIFF"
            # Remove last "," and add closing brackets
            if [[ $JSON == *, ]]; then
              JSON="${JSON%?}"
            fi
            JSON="$JSON]"
            echo $JSON
            # Set output
            echo "::set-output name=folders::$( echo "$JSON" )"
          fi
  deploy:
    needs: changes
    name: Deploy
    if: ${{ needs.changes.outputs.folders != '[]' && needs.changes.outputs.folders != '' }}
    strategy:
      matrix:
        # Parse JSON array containing names of all filters matching any of changed files
        # e.g. ['servicea', 'serviceb'] if both package folders contain changes
        folder: ${{ fromJSON(needs.changes.outputs.folders) }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}
      - name: deploy
        run: npx serverless deploy
        working-directory: ${{ matrix.folder }}
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REST_API_ROOT_ID: ${{ secrets.AWS_REST_API_ROOT_ID }}
          AWS_REST_API_ID: ${{ secrets.AWS_REST_API_ID }}
          AWS_BUCKET_NAME: ${{ secrets.AWS_BUCKET_NAME }}
- Manual Deployment
Sometimes we want to deploy a function manually, or it may have been skipped by the script above. In that case, we can deploy it manually, because deploying it is more important than finding the issue at that moment.
Here, we will skip the step that identifies the changed files using git diff and returns the folders. We can go directly into the function's folder and run the deployment command, e.g. npx serverless deploy.
- For manual deployment, we will take the function name as an action input and deploy that specific function.
name: Manual Serverless Deployment
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
  workflow_dispatch:
    inputs:
      function:
        description: "Function name"
        required: true
- After this, we will use it in our job as below.
jobs:
  deploy:
    if: ${{ github.event_name == 'workflow_dispatch' }}
    name: deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: deploy
        run: npx serverless deploy
        working-directory: ${{ github.event.inputs.function }}
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REST_API_ROOT_ID: ${{ secrets.AWS_REST_API_ROOT_ID }}
          AWS_REST_API_ID: ${{ secrets.AWS_REST_API_ID }}
          AWS_BUCKET_NAME: ${{ secrets.AWS_BUCKET_NAME }}
manual.yml
name: Manual Serverless Deployment
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
  workflow_dispatch:
    inputs:
      function:
        description: "Function name"
        required: true
jobs:
  deploy:
    if: ${{ github.event_name == 'workflow_dispatch' }}
    name: deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: deploy
        run: npx serverless deploy
        working-directory: ${{ github.event.inputs.function }}
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REST_API_ROOT_ID: ${{ secrets.AWS_REST_API_ROOT_ID }}
          AWS_REST_API_ID: ${{ secrets.AWS_REST_API_ID }}
          AWS_BUCKET_NAME: ${{ secrets.AWS_BUCKET_NAME }}
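Besides the Actions tab in the GitHub UI, you can also trigger this workflow from the command line with the GitHub CLI (assuming gh is installed and the workflow file is named manual.yml):
$ gh workflow run manual.yml -f function=servicea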
Here, we saw how we can set up automatic and manual deployments for a serverless application with CI/CD.