szymon-szym

AWS API Gateway with Lambda Web Adapter and Rust (axum)

Problem

When building serverless APIs, the most natural approach is to create a separate function for each endpoint. This keeps functions small, which has a significant positive impact on performance.

On the other hand, if we need to handle a lot of endpoints, we might miss the developer experience of working with web frameworks. Thanks to the amazing AWS Lambda Web Adapter, we can run applications created with Express.js, Flask, Spring Boot (or any other web framework) as a Lambda function.

I am going to play around with the Lambda Web Adapter and check whether there is a chance that the pros of this solution outweigh the cons.

The neat part is that I will use axum, a Rust web framework. With this approach, I hope to minimize the penalty for bringing more code into the Lambda function.

Tradeoffs

Latency

This is the number one concern. When going with a web-framework approach, we should be prepared for a much longer cold start. Not only will the Lambda package be bigger, but during initialization the whole server needs to be set up for us.

The non-obvious observation is that, depending on the shape of the traffic our app receives, the overall response time might actually be shorter. That's because there is a chance for fewer cold starts in general, as the same Lambda is reused for different endpoints.

Developer experience

The serverless developer experience keeps getting smoother, but well-established web frameworks have great tooling around them, and developers often already know how to work with them.

Running web apps locally is trivial. The feedback loop is shorter. Testing is easier. Mature IDE support is in place.

Additional layer

To run a web app in a Lambda function, we need a proxy that translates the incoming event into an HTTP request. The Lambda Web Adapter is truly an amazing extension, but it adds one more dependency.

Project

Code is available in this repository

I want to check if I can use a web framework inside a Lambda function and, at the same time, run away from the consequences of my decisions.

The idea is to take advantage of the fact that Rust is extremely well-suited for AWS Lambda and use a web framework written in it. My assumption is that, up to some level of complexity, starting the whole web app in Rust will be fast enough to provide satisfying performance.

I create a web app with a few CRUD endpoints. I use DynamoDB as persistent storage.

Architecture

I configure API Gateway to pass requests to the Lambda function. The Lambda Web Adapter is initialized before the function code and provides a translation layer between API Gateway and the web server running inside the function. The Lambda uses DynamoDB as the persistence layer.

Application

Let's start by creating the application. I create a new folder and run cargo new --bin lambda-axum-server inside.

Configuration

I install two dependencies:



cargo add clap -F derive,env
cargo add dotenv



It is convenient to define the configuration in a separate file, so in the src folder I create a config.rs file right next to main.rs.



// config.rs
// read configuration from environment variables or from command line args
#[derive(clap::Parser, Debug)]
pub struct Config {
    #[clap(long, env)]
    pub dynamo_table: String,
    #[clap(long, env)]
    pub aws_region: String,
    #[clap(long, env)]
    pub aws_profile: Option<String>,
}



In the root folder, I put a .env.sample file:



DYNAMO_TABLE=
AWS_REGION=
AWS_PROFILE=



Let's start adding content to the main.rs file. This will be the entry point, so I would like to only initialize the app there. I need the AWS SDK dependencies:



cargo add aws-config -F behavior-version-latest
cargo add aws-sdk-dynamodb



In main.rs I read the configuration and create a DynamoDB client:



pub mod config;

use clap::Parser;
use config::Config;


#[tokio::main]
async fn main() {

    dotenv::dotenv().ok();

    let config = Config::parse();

    println!("{:?}", config);

    let aws_config = 
        match config.aws_profile {
            Some(profile) => aws_config::from_env().profile_name(profile).load().await,
            None => aws_config::from_env().load().await,
        };

    let dynamodb_client = aws_sdk_dynamodb::Client::new(&aws_config);

    println!("dynamo client initialized");

}



Application logic

The application is a simple CRUD app without additional logic.

First I create dummy handlers for the endpoints. They will be defined in a new http folder, in the file http/books.rs.

Let's start with basic models.




use axum::{extract::Path, Json};
use chrono::{Utc, DateTime};
use serde::{Serialize, Deserialize};
use uuid::Uuid;

// models

#[derive(Serialize, Deserialize)]
pub struct Book {
    id: Uuid,
    author: String,
    title: String,
    year: DateTime<Utc>,
    description: Option<String>,
}

#[derive(Serialize, Deserialize)]
pub struct BookInput {
    author: String,
    title: String,
    year: DateTime<Utc>,
    description: Option<String>,
}
//...



I have a few new dependencies: chrono for working with dates, uuid for generating UUIDs, and serde with serde_json for serialization.



cargo add uuid -F serde,v4
cargo add chrono -F serde
cargo add serde -F derive
cargo add serde_json



In real life, it doesn't make sense to define a year as a DateTime, but I wanted to see how this type would behave during serialization and deserialization.
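If you want to see what that serialization actually produces, a minimal unit test at the bottom of http/books.rs does the trick. This is just a quick sketch of mine, assuming serde_json is already in the dependencies; with the serde feature enabled, chrono's DateTime<Utc> serializes to an RFC 3339 string.



// at the bottom of http/books.rs - quick serialization sanity check
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn book_serializes_year_as_rfc3339() {
        let book = Book {
            id: Uuid::new_v4(),
            author: "Jane Doe".to_string(),
            title: "Sample".to_string(),
            year: Utc::now(),
            description: None,
        };
        let json = serde_json::to_string(&book).unwrap();
        // the year ends up as e.g. "year":"2024-05-01T10:15:30.123456789Z"
        assert!(json.contains("\"year\":\""));
    }
}
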

Now I define the handler functions. Axum provides a very convenient way to extract parameters from the path and the payload from the body. At this point, the functions just return a dummy book to make sure that serialization works properly.



// ...
// handlers

async fn get_book(Path(id): Path<Uuid>) -> Json<Book> {
    let book = Book {
        id,
        author: "John Doe".to_string(),
        title: "My Book".to_string(),
        year: Utc::now(),
        description: Some("This is my book.".to_string()),
    };
    tokio::time::sleep(std::time::Duration::from_millis(100)).await;

    Json(book)
}

async fn create_book(Json(input): Json<BookInput>) -> Json<Book> {
    let book = Book {
        id: Uuid::new_v4(),
        author: input.author,
        title: input.title,
        year: input.year,
        description: input.description,
    };
    tokio::time::sleep(std::time::Duration::from_millis(100)).await;

    Json(book)
}

async fn update_book(Path(id): Path<Uuid>, Json(input): Json<BookInput>) -> Json<Book> {
    let book = Book {
        id,
        author: input.author,
        title: input.title,
        year: input.year,
        description: input.description,
    };
    tokio::time::sleep(std::time::Duration::from_millis(100)).await;

    Json(book)
}

async fn delete_book(Path(_id): Path<Uuid>) -> () {
    tokio::time::sleep(std::time::Duration::from_millis(100)).await;

    ()
}
// ...



Finally, I have a public function that creates a router for the /books endpoints. In a moment I will use it in the main function.



// ...
// router

pub(crate) fn router() -> axum::Router {
    axum::Router::new()
        .route("/books/:id", axum::routing::get(get_book))
        .route("/books", axum::routing::post(create_book))
        .route("/books/:id", axum::routing::put(update_book))
        .route("/books/:id", axum::routing::delete(delete_book))
}



To let Rust know that I created a new module in the project, I create an http/mod.rs file with a single line:



pub(crate) mod books;



Currently, the structure of the project looks like this:

(image: project structure)

In main.rs I import the module and start the app:



// ...
pub mod http;

use axum::Router;
// ...

    let app = Router::new()
        .merge(http::books::router());
    let listener = tokio::net::TcpListener::bind("127.0.0.1:8080").await.unwrap();
    axum::serve(listener, app).await.unwrap();

 // ...



Local test of the dummy endpoints

After running cargo run, I can test my app locally.
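For example (the payload values are made up; the app listens on 127.0.0.1:8080 as configured in main.rs):



curl -X POST http://127.0.0.1:8080/books \
  -H "Content-Type: application/json" \
  -d '{"author":"Jane Doe","title":"Sample","year":"2024-01-01T00:00:00Z","description":"test"}'

curl http://127.0.0.1:8080/books/00000000-0000-0000-0000-000000000000
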

It looks good:

(image: local test results)

Test of the dummy endpoints in the cloud

Before moving forward with the application, I would love to deploy it to AWS and check how it works. I create a very basic template.yml in the root folder of the project (and by creating I mean copying it from the AWS Lambda Web Adapter examples repository).



AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  rust-axum-zip

  Sample SAM Template for rust-axum-zip

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3

Resources:

  ##### DynamoDB #####
  BooksTable:
    Type: AWS::Serverless::SimpleTable
    Properties:
      PrimaryKey:
        Name: id
        Type: String
      TableName: axum_books_table

  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: lambda-axum-server
      Handler: bootstrap
      Runtime: provided.al2
      Architectures:
        - x86_64
      Environment:
        Variables:
          RUST_BACKTRACE: 1
          RUST_LOG: info
          DYNAMO_TABLE: !Ref BooksTable
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref BooksTable
      Layers:
        - !Sub arn:aws:lambda:${AWS::Region}:753240598075:layer:LambdaAdapterLayerX86:18
      Events:
        Root:
          Type: HttpApi # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
    Metadata:
      BuildMethod: rust-cargolambda # More info about Cargo Lambda: https://github.com/cargo-lambda/cargo-lambda

Outputs:
  # ServerlessHttpApi is an implicit API created out of Events key under Serverless::Function
  # Find out more about other implicit resources you can reference within SAM
  # https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api
  HelloWorldApi:
    Description: "API Gateway endpoint URL for Prod stage for Hello World function"
    Value: !Sub "https://${ServerlessHttpApi}.execute-api.${AWS::Region}.${AWS::URLSuffix}/"
  HelloWorldFunction:
    Description: "HelloWorld Lambda Function ARN"
    Value: !GetAtt HelloWorldFunction.Arn



I deploy the stack with SAM and call the endpoint.
The init duration is around ~120 ms and execution takes another ~100 ms (which is the delay I've put inside the dummy handler functions). To be honest, it is almost too good to be true, because those numbers are very similar to running a raw Lambda function (just the code for a Lambda handler) that creates a DynamoDB client.

(image: cold start timings)

And the warm Lambda, as we'd expect, finishes in about 105 ms.

Plug real database

At this point, my expectations about performance are quite high. I wasn't sure how caching of the initialized DynamoDB client would work. From my observations, it behaves like in a traditional Lambda handler: anything initialized before launching the server is still there for warm invocations.

In axum, there is a way to "inject" context into the handler functions. I will use it to pass my DynamoDB client and the name of the table. Updated code in http/books.rs:



// ...

#[derive(Clone)]
pub struct AppContext {
    table_name: String,
    dynamodb_client: aws_sdk_dynamodb::Client,
}


pub(crate) fn router(table_name: String, dynamodb_client: aws_sdk_dynamodb::Client) -> axum::Router {

    let app_state = AppContext {
        table_name,
        dynamodb_client
    };

    axum::Router::new()
        .route("/books/:id", axum::routing::get(get_book))
        .route("/books", axum::routing::post(create_book))
        .route("/books/:id", axum::routing::put(update_book))
        .route("/books/:id", axum::routing::delete(delete_book))
        .with_state(app_state)
}

//...



In main.rs I can now pass the client and the table name as arguments to the function that initializes the books router:



// ...
    let app = Router::new()
        .merge(http::books::router(config.dynamo_table, dynamodb_client));
//...




I update the handlers in books.rs to use the injected client:




//...
// additional imports for the updated handlers
use axum::extract::State;
use serde_dynamo::from_item;

async fn get_book(ctx: State<AppContext>, Path(id): Path<Uuid>) -> Json<Book> {

    let result = ctx.dynamodb_client.get_item()
        .table_name(&ctx.table_name)
        .key("id", aws_sdk_dynamodb::types::AttributeValue::S(id.to_string()))
        .send().await.unwrap();

    let item = result.item().unwrap();

    let book: Book = from_item(item.clone()).unwrap();

    Json(book)
}

// ...



To make my life easier, I use the serde_dynamo crate, which bridges serde and DynamoDB items, so I can use its from_item function:



cargo add serde_dynamo -F aws-sdk-dynamodb+1


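The write path is symmetrical: serde_dynamo also provides to_item, which turns the struct into a DynamoDB item. A sketch of create_book along these lines (error handling still omitted, like in the rest of the handlers):



// http/books.rs (additionally requires: use serde_dynamo::to_item;)

async fn create_book(ctx: State<AppContext>, Json(input): Json<BookInput>) -> Json<Book> {
    let book = Book {
        id: Uuid::new_v4(),
        author: input.author,
        title: input.title,
        year: input.year,
        description: input.description,
    };

    // serialize the struct into a DynamoDB item and store it
    let item = to_item(&book).unwrap();

    ctx.dynamodb_client
        .put_item()
        .table_name(&ctx.table_name)
        .set_item(Some(item))
        .send()
        .await
        .unwrap();

    Json(book)
}
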

Once all handlers are updated, it's time to test them in the cloud.

Testing in the cloud #2

(image: cold start request timing)

Most of this time is spent sending data across the ocean; the Lambda itself took ~250 ms. That is pretty neat for a cold start, initializing the DynamoDB client, and creating a record in DynamoDB.

(image: warm request timings)

The warm Lambda is much faster, which is not surprising at all. Updating, deleting, creating, and getting a single item takes 10-20 ms, which is probably mostly networking between the function and DynamoDB.

Next steps

As a next step, I need to add error responses and error handling (at this point the app just panics if anything goes wrong).
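The rough direction would be a dedicated error type that implements axum's IntoResponse, so handlers can return Result instead of unwrapping. A sketch (the AppError type below is hypothetical, not part of the current code):



use axum::{
    http::StatusCode,
    response::{IntoResponse, Response},
    Json,
};

// hypothetical application error type - just to illustrate the direction
pub enum AppError {
    NotFound,
    Internal(String),
}

impl IntoResponse for AppError {
    fn into_response(self) -> Response {
        let (status, message) = match self {
            AppError::NotFound => (StatusCode::NOT_FOUND, "not found".to_string()),
            AppError::Internal(msg) => (StatusCode::INTERNAL_SERVER_ERROR, msg),
        };
        // handlers would then return Result<Json<Book>, AppError>
        (status, Json(serde_json::json!({ "error": message }))).into_response()
    }
}
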

Tracing is also missing, but I believe that this topic deserves a separate blog post.

In other words, the example I've created so far is unfinished, but it was enough to run a small proof of concept.

Summary

Putting a whole web app in a Lambda function might sound a bit strange, as it contradicts serverless intuition. Why would we put more code into the function than is needed to handle a specific event?

In my case, thanks to axum and the Lambda Web Adapter, it turned out that the cost of spinning up a whole web app written in Rust is pretty low. From my perspective, this approach is worth further exploration.
