Davide De Sio

Superpower REST API DX with Serverless ⚡ and DevOps Best Practices on AWS

⚡ Yet another serverless article?

There are a lot of great articles about serverless and how to build and deploy REST APIs using AWS. However, IMHO there is a lack of examples showing how serverless superpowers DX and enables DevOps culture, while still letting devs focus primarily on the only thing that matters for the business (business logic, one of the pillars of serverless itself).

In this article I would like to show several concepts of DevOps culture that a dev team can apply because a serverless stack enables them, independently of the stack itself, the language, or the cloud provider.
My examples are based on Node.js using Serverless Framework on AWS, but in my opinion these concepts should be covered regardless of the stack.

👨‍💻 Improving DX with Serverless and IaC

Serverless presents a huge opportunity to have a development platform and a good Developer Experience (DX), allowing devs to focus on writing code (and tests!) without losing control over infrastructure, security, CI/CD, and so on.

With a serverless approach (but we could say with any cloud-native development approach), infrastructure is part of the living code: devs describe their API endpoints with Infrastructure as Code and seamlessly integrate resource provisioning into CI/CD pipelines without any additional overhead.

This serves as a starting point for developers to embrace DevOps culture: while serverless allows them to concentrate on code, they also have access to the ecosystem and platform to delve deeper into cloud concepts such as architecture, networking, security, and so forth.

Nowadays, devs should not code without understanding their cloud environment (or wherever their code will be released), and serverless dramatically accelerates cloud adoption, as its learning curve is simpler: devs will primarily use managed services and available frameworks, gradually learning and deepening cloud/ops concepts.

There is a wide range of technologies and frameworks offering IaC capabilities for going serverless. I use Serverless Framework over SAM by AWS because it suits my preferred stack (Node.js, one of the supported runtimes for AWS Lambda), boasts a vibrant community with numerous plugins that enhance my team's Developer Experience (DX), and lets me expand its capabilities simply by leveraging AWS CloudFormation.

A must-have plugin is undoubtedly Serverless Offline, which is a fantastic tool for quickly setting up a local environment without the need for any other components.

Just install it and run:

sls offline

to list all the endpoints available in your serverless definition.
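For reference, enabling the plugin is just another entry in serverless.yml. Here is a sketch (the httpPort option is optional and shown only as an example):

```yaml
plugins:
  - serverless-offline

custom:
  serverless-offline:
    # Local port for the emulated API Gateway (optional)
    httpPort: 3000
```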

📄 Doc as code

Typically, to understand what a feature/endpoint should do, developers won't start coding: or at least they shouldn't! They will begin by gathering requirements, which should ideally be captured in some sort of documentation (hopefully!).

Writing an API means, first of all, that a dev should find a way to write a contract or document which should be:

  • validated by customers (or QA team)
  • used by consumers (front-end team)
  • implemented (back-end team)
  • released (ops team)

Those are also the actors in DevOps culture.

This is where the OpenAPI specification usually comes in to save a developer's life in the API development lifecycle: it provides a simple way to describe requirements and design them as a contract between all those stakeholders. This specification is language-agnostic, and anyone can use it in their own stack.
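To give an idea of what such a contract looks like, here is a minimal hand-written OpenAPI 3 sketch for a single endpoint (the /hello path and HelloResponse schema are illustrative names, not generated output):

```yaml
openapi: 3.0.3
info:
  title: Hello API
  version: 1.0.0
paths:
  /hello:
    get:
      summary: Just a sample GET to say hello
      responses:
        '200':
          description: A successful response
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/HelloResponse'
components:
  schemas:
    HelloResponse:
      type: object
      properties:
        message:
          type: string
```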


The good news about using serverless IaC is that we can integrate documentation directly into our IaC definitions (Doc as Code) for each function/endpoint. We can then generate static documentation from it using available tools such as Redocly.

Moreover, the same specification can be part of the IaC itself: for example, request and response models, which describe how to interact with your endpoint (input) and what it should return (output).

Here is the IaC, with Serverless Framework, for a simple "Hello World" function/endpoint (only the part for the function itself): documentation including summary, description, and responses is incorporated into the IaC definition and will be used while provisioning the route and methods on Amazon API Gateway.

...
hello:
  handler: src/function/hello/index.handler #function handler
  events: #events
    #api gateway event
    - http:
        path: /hello #api endpoint path
        method: 'GET' #api endpoint method
        documentation:
          summary: "/hello"
          description: "Just a sample GET to say hello"
          methodResponses:
            - statusCode: 200
              responseBody:
                description: "A successful response"
              responseModels:
                application/json: "HelloResponse"
            - statusCode: 500
              responseBody:
                description: "Internal Server Error"
              responseModels:
                application/json: "ErrorResponse"
            - statusCode: 400
              responseBody:
                description: "Request error"
              responseModels:
                application/json: "BadRequestResponse"
...
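The response models referenced above ("HelloResponse" and friends) are declared once and shared across endpoints. Here is a sketch of how such a model can be defined (the exact keys depend on the documentation plugin in use; the schema shown is an assumption):

```yaml
custom:
  documentation:
    models:
      - name: "HelloResponse"
        description: "Body returned by GET /hello"
        contentType: "application/json"
        schema:
          type: object
          properties:
            message:
              type: string
```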

What's particularly advantageous about this approach is that devs can transform this specification into an OpenAPI spec using simple plugins such as Serverless OpenAPI Documenter.

In YAML format, using this command:

sls openapi generate -o ./doc/build/openapi.yaml -a 3.0.3 -f yaml

Or in JSON format, also obtaining a useful Postman collection:

sls openapi generate -o ./doc/build/openapi.json -a 3.0.3 -f json -p ./doc/build/postman.json

We've discussed the significance of the OpenAPI specification for all stakeholders in the development lifecycle. Additionally, having it means that developers can utilize any OpenAPI-ready tool to generate live documentation. This documentation can then be deployed with your CI/CD pipeline on any hosting platform (for example, an S3 bucket exposed via CloudFront) to simplify the process of reading the contract/documentation for everyone, including QA, consumers, and developers themselves.
I use Redocly for this task.

Here is the command:

redocly build-docs ./doc/build/openapi.yaml --output=./doc/build/index.html

Which gives an output like this:

Do you see the point here?
Writing Infrastructure as Code (IaC) and Documentation as Code (DaC) at the same time gives us a very good Developer Experience (DX) for tasks that are boring/non-business for developers, such as writing documentation, while still adding a lot of value to the business itself, which always needs well-documented software.

Furthermore, since documentation lives within the code, it is versioned in your repository (using git, for instance), which becomes the history of your API changes.

✅ TDD

Let's take another step forward: We now have documentation detailing what our API should do. This documentation serves as a specification of the behaviors we should provide to our API endpoints, which must be respected during coding and expected during execution.

Doesn't that sound familiar?

This is precisely what we need to describe a test.

Is there a way to use the OpenAPI specification to write a simple behavioral test validated against it?
Yes: we can use a validator to extend our test environment's assertions (or implement one ourselves).
As I use jest, and OpenAPIValidators are available for it, it's simple for me to use them in the development lifecycle.

"A picture is worth a thousand words"

(image taken from the OpenAPIValidators repo)

Let's write a simple test verifying that our "hello world" function returns a successful response (status 200) and respects our "HelloResponse" model specification (which we saw before in our YAML IaC/DaC definition).

'use strict';
// tests for hello

const mod = require('./../../src/function/hello/index');

const jestPlugin = require('serverless-jest-plugin');
const lambdaWrapper = jestPlugin.lambdaWrapper;
const wrapped = lambdaWrapper.wrap(mod, { handler: 'handler' });

// Import jestOpenAPI plugin
const jestOpenAPI = require('jest-openapi').default;
// Load an OpenAPI file (YAML or JSON) into this plugin
jestOpenAPI('openapi.json');

describe('hello', () => {

  it('Test hello function', () => {
    return wrapped.run({}).then((response) => {
      //Expect response to be defined
      expect(response).toBeDefined();
      //Validate status
      expect(response.statusCode).toEqual(200);
      //Validate response against HelloResponse schema
      expect(JSON.parse(response.body)).toSatisfySchemaInApiSpec("HelloResponse");
    });
  });
});


Once again, this is a huge improvement in the API development lifecycle and for DX: we are giving developers a simple standard to write tests BEFORE writing a single line of the function code (essentially Test Driven Development):

  • write the IaC for your function
  • decorate it with DaC
  • use the generated doc to set up your test
  • THEN write your function until your tests are green.

Obviously, testing against the OpenAPI spec does not cover all the tests needed in software development, but at least it assures the behavioral ones are covered. One of the first problems a dev faces when approaching TDD is "what test should I write?"
Well, at least the ones that assure your API is compliant with your specification: if everything matches and something is still wrong, only the specification itself can be at fault, which means there is a gap in the requirements.

Plus, using a test framework that reports coverage gives devs a good starting point on how many lines of code are covered by tests.


Do we know the best way to engage developers in anything that really matters for software development but isn't strictly business critical? Give them numbers and metrics to satisfy, or something red to turn green: they'll strive to ensure everything is perfect!

As we have tests, we can now execute them or integrate them into our pipeline, as simply as adding one line to run jest:

jest --coverage

🔐 Security by design

"Please do not miss security"
This is what my good friend and workmate A. Pagani says to me before I can even say hello to him.

My typical answer is that his concerns are covered by well-architected software, using AWS WAF in front of my API and Amazon Cognito as User and Identity Pools to access it, granting the right IAM role to consumers (but again, those are just managed services by AWS; we could integrate any others we prefer).

Jokes aside, security should be a matter of design, and the architecture components that ensure it should never be overlooked.

For this task I prefer to extend Serverless Framework with AWS CloudFormation, including a separate file for each component needed.

Here is how I include them:

...
## Create resources with separate CloudFormation templates
resources:
  ## WAF
  - ${file(serverless-waf.yml)}
  ## Cognito
  - ${file(serverless-cognito-user-pool.yml)}
  - ${file(serverless-cognito-identity-pool.yml)}
...

Here is a sample for the WAF and its association with the Amazon API Gateway created by serverless:

Resources:
  WAFv2WebACL:
    Type: "AWS::WAFv2::WebACL"
    Properties:
      DefaultAction:
        Allow: {}
      Description: WebACL-${self:custom.service}-${self:custom.stage}
      Name: WebACL-${self:custom.service}-${self:custom.stage}
      Rules:
        ...

  LogGroupAPIRestPublic:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: aws-waf-logs-${self:custom.service}-${self:custom.stage}
      RetentionInDays: 365

  WAFv2LoggingConfiguration:
    DependsOn:
      - WAFv2WebACL
      - LogGroupAPIRestPublic
    Type: "AWS::WAFv2::LoggingConfiguration"
    Properties:
      LogDestinationConfigs:
        - !GetAtt LogGroupAPIRestPublic.Arn
      ResourceArn: !GetAtt WAFv2WebACL.Arn

  WAFv2WebACLAssociation:
    DependsOn:
      - WAFv2WebACL
    Type: "AWS::WAFv2::WebACLAssociation"
    Properties:
      ResourceArn:
        Fn::Join:
          - ''
          - - 'arn:aws:apigateway:eu-west-1::/restapis'
            - "/"
            - !Ref MyApiGW
            - "/stages/"
            - !Sub ${self:custom.stage}
      WebACLArn: !GetAtt WAFv2WebACL.Arn

Here is a sample for an Amazon Cognito User Pool:

Resources:
  CognitoUserPool:
    Type: AWS::Cognito::UserPool
    Properties:
      # Generate a name based on the stage
      UserPoolName: user-pool-${self:custom.service}-${self:custom.stage}
      # Set email as an alias
      UsernameAttributes:
        - email
      AutoVerifiedAttributes:
        - email

  CognitoUserPoolClient:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      # Generate an app client name based on the stage
      ClientName: user-pool-client-${self:custom.service}-${self:custom.stage}
      UserPoolId:
        Ref: CognitoUserPool
      ExplicitAuthFlows:
        - ADMIN_NO_SRP_AUTH
      GenerateSecret: false

# Print out the Id of the User Pool that is created
Outputs:
  UserPoolId:
    Value:
      Ref: CognitoUserPool

  UserPoolClientId:
    Value:
      Ref: CognitoUserPoolClient

Here is a sample for an Amazon Cognito Identity Pool:

Resources:
  # The federated identity for our user pool to auth with
  CognitoIdentityPool:
    Type: AWS::Cognito::IdentityPool
    Properties:
      # Generate a name based on the stage
      IdentityPoolName: identity-pool-${self:custom.service}-${self:custom.stage}
      # Don't allow unauthenticated users
      AllowUnauthenticatedIdentities: false
      # Link to our User Pool
      CognitoIdentityProviders:
        - ClientId:
            Ref: CognitoUserPoolClient
          ProviderName:
            Fn::GetAtt: [ "CognitoUserPool", "ProviderName" ]

  # IAM roles
  CognitoIdentityPoolRoles:
    Type: AWS::Cognito::IdentityPoolRoleAttachment
    Properties:
      IdentityPoolId:
        Ref: CognitoIdentityPool
      Roles:
        authenticated:
          Fn::GetAtt: [CognitoAuthRole, Arn]

  # IAM role used for authenticated users
  CognitoAuthRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: 'Allow'
            Principal:
              Federated: 'cognito-identity.amazonaws.com'
            Action:
              - 'sts:AssumeRoleWithWebIdentity'
            Condition:
              StringEquals:
                'cognito-identity.amazonaws.com:aud':
                  Ref: CognitoIdentityPool
              'ForAnyValue:StringLike':
                'cognito-identity.amazonaws.com:amr': authenticated
      Policies:
        - PolicyName: cognito-authorized-policy-${self:custom.service}-${self:custom.stage}
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: 'Allow'
                Action:
                  - 'mobileanalytics:PutEvents'
                  - 'cognito-sync:*'
                  - 'cognito-identity:*'
                Resource: '*'

              # Allow users to invoke our API
              - Effect: 'Allow'
                Action:
                  - 'execute-api:Invoke'
                Resource:
                  Fn::Join:
                    - ''
                    -
                      - 'arn:aws:execute-api:'
                      - Ref: AWS::Region
                      - ':'
                      - Ref: AWS::AccountId
                      - ':'
                      - Ref: ApiGatewayRestApi
                      - '/*'

# Print out the Id of the Identity Pool that is created
Outputs:
  IdentityPoolId:
    Value:
      Ref: CognitoIdentityPool


Don't forget to secure your functions in a Virtual Private Cloud (VPC), implement high availability (HA) with subnets in different AZs, and use security groups. We can accomplish this on a per-function basis or globally for our API under the provider section of serverless.yml.

provider: 
  name: aws
  deploymentMethod: direct
  # Block public access on deployment bucket
  deploymentBucket:
    blockPublicAccess: true
  # The AWS region in which to deploy (us-east-1 is the default)
  region: ${env:AWS_REGION}
  runtime: nodejs18.x
  environment:
    # environment variables
    APP_ENV: ${env:APP_ENV}
  ## VPC Configuration
  vpc:
    securityGroupIds:
      - ${env:SG1}
    subnetIds:
      - ${env:SUBNET1}
      - ${env:SUBNET2}
      - ${env:SUBNET3}
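For completeness, the same vpc block can instead be attached to a single function when only some endpoints need network isolation. A sketch (function name and environment variable names are the hypothetical ones used earlier):

```yaml
functions:
  hello:
    handler: src/function/hello/index.handler
    # Per-function VPC override: only this function runs inside the VPC
    vpc:
      securityGroupIds:
        - ${env:SG1}
      subnetIds:
        - ${env:SUBNET1}
        - ${env:SUBNET2}
```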

📈 Monitoring

We should cover another pillar of DevOps culture: monitoring our API at runtime. We'll use another great plugin written for SAM, Serverless Framework, CDK, and CloudFormation: SLIC Watch from the folks at fourTheorem (who certainly know what to do on AWS and serverless).

Just install it and plug it into your serverless.yml, enabling it with a single line to see the magic happen.

Install:

npm install serverless-slic-watch-plugin --save-dev

Enable it:

plugins:
  - serverless-slic-watch-plugin

We can get dashboards and alarms on Amazon CloudWatch for a lot of services, in our case specifically for Amazon API Gateway and AWS Lambda.

AWS Lambda:

Amazon API Gateway:

Another good service we can enable with just a line of code in our serverless.yml is AWS X-Ray. We can configure it on a per-function basis, but I prefer to enable it in the provider section to ensure it's enabled for every function/endpoint.

...
# Enable lambda tracing with xray
tracing:
  lambda: true
...

Finally, do not forget to implement a robust logging strategy in your code: AWS Lambda has built-in integration with Amazon CloudWatch, which provides powerful capabilities for searching and identifying issues at runtime.
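As one possible shape for such a strategy, here is a minimal structured-logging sketch (my own example, not code from the article or any specific library): emitting one JSON object per line makes CloudWatch Logs Insights queries on fields like level or requestId straightforward.

```javascript
'use strict';

// Minimal structured logger: one JSON object per console.log line.
// CloudWatch Logs Insights can then filter on fields such as
// `level` or `requestId` without regex parsing.
const log = (level, message, context = {}) => {
  const entry = {
    level,
    message,
    timestamp: new Date().toISOString(),
    ...context,
  };
  console.log(JSON.stringify(entry));
  return entry; // returned to make the logger easy to unit-test
};

// Example usage inside a handler (requestId is a hypothetical field)
log('info', 'hello invoked', { requestId: 'abc-123' });
```

In a real project you would likely reach for a library with log levels and sampling, but even this ten-line version already beats unstructured console.log strings at query time.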

The crucial point here is not necessarily how we choose to implement it (we can also use AWS CloudFormation to create what we need without any plugin or framework), but rather that monitoring should be integrated into your Infrastructure as Code (IaC) and development lifecycle.

🔁 CI/CD

Let's bring everything together:

  • We've described our API infrastructure using Infrastructure as Code (IaC).
  • It's documented using Documentation as Code (DaC) with OpenAPI.
  • Security measures are implemented by design, including AWS WAF, VPC, subnets, security groups, Amazon Cognito, and AWS IAM roles.
  • Test coverage is comprehensive thanks to Test Driven Development (TDD).
  • Development is done (I have no doubt about it!)
  • We have a monitoring strategy in place using Amazon CloudWatch and AWS X-Ray.

We are finally ready to go into the cloud. ☁️☁️☁️

This should be as simple as choosing your CI/CD provider (AWS in our case) and letting the pipeline repeat commands we have seen as part of our development process itself.

Here is a sample buildspec.yaml file to run a CI/CD pipeline on AWS CodePipeline with AWS CodeBuild: before deploying the API at the very last line, it generates the OpenAPI specification in various formats and a static doc, and uploads everything to S3 (using environment variables for the stage and bucket name).

version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 18
    commands:
      # Install serverless and other dependencies
      - npm install 
  build:
    commands:
      # Generate documentation
      - sls openapi generate -o ./doc/openapi.yaml -a 3.0.3 -f yaml
      - sls openapi generate -o ./doc/openapi.json -a 3.0.3 -f json -p ./doc/postman.json
      - redocly lint ./doc/openapi.yaml --generate-ignore-file
      - redocly build-docs ./doc/openapi.yaml --output=./doc/index.html
      # Move generated documentation to s3
      - aws s3 cp --recursive ./doc s3://$DOC_BUCKET/$STAGE_NAME
      # Run tests
      - jest --coverage
      # Deploy API with serverless
      - sls deploy --stage $STAGE_NAME #deploy serverless specific stage


We could provision AWS CodePipeline and AWS CodeBuild resources with another AWS CloudFormation template, but I won't include it here as it's beyond the scope of this article.

🏁 Final Thoughts

The outcome will be an API deployed on Amazon API Gateway, backed by AWS Lambda functions with a dedicated security group, isolated within a specific VPC, and made highly available across subnets in different AZs. It will be protected by AWS WAF and accessible through interaction with Amazon Cognito, which provides appropriate AWS IAM roles to consumers. Additionally, the API documentation, written as code, will be exposed using Amazon S3 and Amazon CloudFront.

This could be, for example, an API for a mobile backend or a Single Page Application, as described here.

I would like to conclude this article by remarking that the resulting architecture is just one part of the picture: the true game changer in serverless is the process of achieving it, by implementing DevOps best practices and enhancing Developer Experience (DX) to allow developers to fully enjoy the journey.

I love to travel, but hate to arrive
Albert Einstein

Another important result is that your infrastructure files (those that compose your serverless.yml) serve as a common language between dev and ops departments, understood by all stakeholders of the development lifecycle.
I'm not saying that we've solved the never-ending battle between devs and ops, but at least we are building a bridge with solid foundations: I feel that we now have a platform for enhancing the team's Developer Experience (DX) that can also be comprehended by our operations colleagues. This platform can help the development team fill in any cloud operations concepts that may be missing.

🌐 Resources

You can find a skeleton of this architecture, open sourced by Eleva, here.
It has a hello function and a basic CRUD on a hypothetical USER route/entity, which you can use to start developing your own API.

🏆 Credits

A heartfelt thank you to my colleagues:

  • A. Fraccarollo and, again, A. Pagani, as the co-authors of the CloudFormation files and watchful eyes on the networking and security aspects.
  • C. Belloli and L. Formenti for pushing me out of my nerd cave.
  • L. De Filippi for enabling us to make this repo open source and explain how we develop serverless APIs at Eleva.

We all believe in sharing as a tool to improve our work; therefore, every PR is welcome.

⏭️ Next steps

In our Serverless Node API Skeleton repository you will find several other concepts not covered by this article, which I'll probably discuss in future ones (feel free to ask if any of them interests you), such as:

  • FinOps with Serverless
  • AWS Lambda in private subnets using AWS NAT Gateway
  • AWS Lambda Layers for shared functions
  • AWS Lambda Layers for Node.js dependencies
  • Strategies to mitigate AWS Lambda cold starts
  • Versioning as separate stacks using serverless stages
  • Database: Amazon DynamoDB / Amazon RDS provisioning
  • Reducing cloud footprint with package patterns
  • Reducing cloud footprint for AWS Lambda version with serverless-prune-plugin by Clay Gregory
  • Autogenerating tests with serverless-jest-plugin by the folks at Nordcloud
  • serverless-jetpack for faster deployment
  • This same stack to develop a serverless OCR using Amazon Textract

📖 Further Readings

Unleashing the power of serverless for solo developers

🙋 Who am I

I'm D. De Sio and I work as a Solution Architect and Dev Tech Lead in Eleva.
I'm currently (Apr 2024) an AWS Certified Solutions Architect Professional, but also an AWS User Group Leader (in Pavia) and, last but not least, a #serverless enthusiast.
My work in this field is to advocate for serverless and help more dev teams adopt it, as well as help customers break their monoliths into APIs and microservices using it.
