John Owolabi Idogun

CryptoFlow: Building a secure and scalable system with Axum and SvelteKit - Part 0

Introduction

Being a pragmatist, I find it intriguing to learn by building cool systems with the best tools currently available. The last time I did this, three series came out of it and it was fulfilling. Having taken a break, I decided to build something with axum, "a rust web application framework that focuses on ergonomics and modularity". As usual, SvelteKit comes to our rescue at the front end. What are we building? That's a nice question!

We will be building CryptoFlow, a Q&A web service like Stack Overflow where a securely authenticated user can ask question(s) related to the world of cryptocurrency and others (securely authenticated too) can proffer answers. Another cool thing about it is that on each question page, there will be "real-time" prices and price histories of all the tagged cryptocurrencies (limited to 4 tags per question). The home page may have some charts too. The data come from the CoinGecko API service (if you use my referral code, CGSIRNEIJ, for a subscription, I get some commission). Authentication will be session-based, and we'll have modular configuration with robust error handling.

The overall features of CryptoFlow include but are not limited to:

  • Secure and Scalable Session-Based Authentication System: Featuring robust security protocols, including email verification to ensure user authenticity. This system is designed to scale efficiently as user traffic increases.
  • Ergonomic and Modular CRUD Service for Optimal Performance: The system boasts an intuitive CRUD (Create, Read, Update, Delete) service, ensuring smooth and efficient data management. This is complemented by optimized performance, catering to high-speed interactions and data processing.
  • Modularized Code Structure with Self-Contained Components: The architecture of CryptoFlow is based on a modular design inherited from the previous series with actix-web, promoting maintainability and ease of updates. Each module is self-contained, reducing dependencies and enhancing the system's robustness.
  • Real-Time Cryptocurrency Data Integration: Leveraging the CoinGecko API (don't forget to use my code CGSIRNEIJ for your subscription), the platform provides real-time cryptocurrency data, enriching user experience with up-to-date market insights.
  • Secure and Efficient Data Storage: Utilizing modern database solutions (PostgreSQL & Redis), CryptoFlow ensures secure and efficient storage and retrieval of data, maintaining high standards of data integrity and accessibility.

NOTE: The program isn't feature-complete yet! Contributions are welcome.

A handful of the ideas come from the previous series, with some slight modifications for better modularity. Error handling is also more robust. Without further ado, let's get into it!

Technology stack

For emphasis, our tech stack comprises:

  • Backend - Some crates that will be used are:

    • Axum v0.7 - Main backend web framework
    • tokio - An asynchronous runtime for Rust
    • serde - Serializing and Deserializing Rust data structures
    • MiniJinja v1 - Templating engine
    • SQLx - Async SQL toolkit for rust
    • PostgreSQL - Database
    • Redis - Storage for tokens, sessions, etc.
    • Docker - For containerization
  • Frontend - Some tools that will be used are:

    • SvelteKit - Main frontend framework

Assumption

A simple prerequisite is to skim through the previous series.

Source code

The source code for this series is hosted on GitHub via:

GitHub: Sirneij/cryptoflow

A Q&A web application to demonstrate how to build a secure and scalable client-server application with axum and sveltekit

CryptoFlow

CryptoFlow is a full-stack web application built with Axum and SvelteKit. It's a Q&A system tailored towards the world of cryptocurrency!

I also have the application live. You can interact with it here. Please note that the backend was deployed on Render which:

Spins down a Free web service that goes 15 minutes without receiving inbound traffic. Render spins the service back up whenever it next receives a request to process. Spinning up a service takes up to a minute, which causes a noticeable delay for incoming requests until the service is back up and running. For example, a browser page load will hang temporarily.

Its building process is explained in this series of articles.

Project structure

Currently, the backend structure looks like this:

.
├── Cargo.lock
├── Cargo.toml
├── migrations
│   ├── 20231230194839_users_table.down.sql
│   ├── 20231230194839_users_table.up.sql
│   ├── 20240101210638_qanda.down.sql
│   └── 20240101210638_qanda.up.sql
├── settings
│   ├── base.yml
│   ├── development.yml
│   └── production.yml
├── src
│   ├── lib.rs
│   ├── main.rs
│   ├── models
│   │   ├── mod.rs
│   │   ├── qa.rs
│   │   └── users.rs
│   ├── routes
│   │   ├── crypto
│   │   │   ├── coins.rs
│   │   │   ├── mod.rs
│   │   │   ├── price.rs
│   │   │   └── prices.rs
│   │   ├── health.rs
│   │   ├── mod.rs
│   │   ├── qa
│   │   │   ├── answer.rs
│   │   │   ├── answers.rs
│   │   │   ├── ask.rs
│   │   │   ├── mod.rs
│   │   │   └── questions.rs
│   │   └── users
│   │       ├── activate_account.rs
│   │       ├── login.rs
│   │       ├── logout.rs
│   │       ├── mod.rs
│   │       └── register.rs
│   ├── settings.rs
│   ├── startup.rs
│   ├── store
│   │   ├── answer.rs
│   │   ├── crypto.rs
│   │   ├── general.rs
│   │   ├── mod.rs
│   │   ├── question.rs
│   │   ├── tag.rs
│   │   └── users.rs
│   └── utils
│       ├── crypto.rs
│       ├── email.rs
│       ├── errors.rs
│       ├── middleware.rs
│       ├── mod.rs
│       ├── password.rs
│       ├── qa.rs
│       ├── query_constants.rs
│       ├── responses.rs
│       └── user.rs
└── templates
    └── user_welcome.html

I know it's overwhelming, but most of these were explained in the previous series, and we will go over the rationale behind them as this series progresses.

Implementation

Step 1: Start a new project and install dependencies

We'll be using cryptoflow as the root directory of both the front- and back-end applications. In the folder, do:

~/cryptoflow$ cargo new backend

As expected, a new directory named backend gets created. It comes with Cargo.toml, Cargo.lock, and src/main.rs. Open up the Cargo.toml file and populate it with:

[package]
name = "backend"
version = "0.1.0"
authors = ["John Idogun <sirneij@gmail.com>"]
edition = "2021"

[lib]
path = "src/lib.rs"

[[bin]]
path = "src/main.rs"
name = "backend"

[dependencies]
argon2 = "0.5.2"
axum = { version = "0.7.3", features = ["macros"] }
axum-extra = { version = "0.9.1", features = ["cookie-private", "cookie"] }
bb8-redis = "0.14.0"
config = { version = "0.13.4", features = ["yaml"] }
dotenv = "0.15.0"
itertools = "0.12.0"
lettre = { version = "0.11.2", features = ["builder", "tokio1-native-tls"] }
minijinja = { version = "1.0.10", features = ["loader"] }
pulldown-cmark = "0.9.3"
regex = "1.10.2"
reqwest = { version = "0.11.23", features = ["json"] }
serde = { version = "1.0.193", features = ["derive"] }
serde_json = "1.0.108"
sha2 = "0.10.8"
sqlx = { version = "0.7.3", features = [
    "runtime-async-std-native-tls",
    "postgres",
    "uuid",
    "time",
    "migrate",
] }
time = { version = "0.3.31", features = ["serde"] }
tokio = { version = "1.35.1", features = ["full"] }
tower-http = { version = "0.5.0", features = ["trace", "cors"] }
tracing = "0.1.40"
tracing-subscriber = { version = "0.3.18", features = ["env-filter"] }
uuid = { version = "1.6.1", features = ["v4", "serde"] }

The prime suspects are axum, argon2 (for password hashing), axum-extra (used for cookie administration), bb8-redis (an async redis pool), pulldown-cmark (for converting markdown to HTML), and others.

Step 2: Build out the project's skeleton

Building out the skeletal structure of the project is the same as what we had in the actix-web version apart from the absence of src/telemetry.rs:

~/cryptoflow/backend$ touch src/lib.rs src/startup.rs src/settings.rs

~/cryptoflow/backend$ mkdir src/routes && touch src/routes/mod.rs src/routes/health.rs

src/lib.rs and src/settings.rs have almost the same content as before. The updated part of src/settings.rs is:

// src/settings.rs
#[derive(serde::Deserialize, Clone)]
pub struct Secret {
    pub token_expiration: i64,
    pub cookie_expiration: i64,
}

/// Global settings for exposing all preconfigured variables
#[derive(serde::Deserialize, Clone)]
pub struct Settings {
    pub application: ApplicationSettings,
    pub debug: bool,
    pub email: EmailSettings,
    pub frontend_url: String,
    pub interval_of_coin_update: u64,
    pub superuser: SuperUser,
    pub secret: Secret,
}

Based on this update, the .yml files have:

# settings/base.yml
---
interval_of_coin_update: 24

There is a background task we will write that periodically fetches the updated list of coins from the CoinGecko API. interval_of_coin_update holds the interval (in hours) at which that task runs.
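
To make that concrete, here's a minimal sketch of what such a periodic task could look like with tokio. The fetch_coins helper here is a hypothetical placeholder; the real implementation comes later in the series:

// A rough sketch of the periodic coin-list refresh, not the final implementation.
async fn run_coin_updater(interval_hours: u64) {
    let mut ticker =
        tokio::time::interval(std::time::Duration::from_secs(interval_hours * 3600));
    loop {
        ticker.tick().await; // the first tick completes immediately
        if let Err(e) = fetch_coins().await {
            tracing::error!("Failed to update coin list: {e}");
        }
    }
}

// Hypothetical placeholder for the CoinGecko call we flesh out later in the series.
async fn fetch_coins() -> Result<(), reqwest::Error> {
    let coins = reqwest::get("https://api.coingecko.com/api/v3/coins/list")
        .await?
        .json::<serde_json::Value>()
        .await?;
    tracing::info!("Fetched {} coins", coins.as_array().map_or(0, |a| a.len()));
    Ok(())
}

Something like tokio::spawn(run_coin_updater(settings.interval_of_coin_update)) at startup would then keep it running in the background.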

# settings/development.yml
---
secret:
  token_expiration: 15
  cookie_expiration: 1440

Those settings do exactly what their names imply: they store the expiration periods (in minutes) of the token and cookie, respectively.
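
For instance, when we later issue tokens and cookies, their expiry timestamps can be derived from these values with the time crate. A tiny sketch, assuming a settings instance is in scope:

// Sketch: turning the configured minutes into concrete expiry timestamps.
let token_expires_at = time::OffsetDateTime::now_utc()
    + time::Duration::minutes(settings.secret.token_expiration);
let cookie_expires_at = time::OffsetDateTime::now_utc()
    + time::Duration::minutes(settings.secret.cookie_expiration);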

Next is src/startup.rs:

use crate::routes;
use tokio::signal;

pub struct Application {
    port: u16,
}

impl Application {
    pub fn port(&self) -> u16 {
        self.port
    }
    pub async fn build(
        settings: crate::settings::Settings,
        _test_pool: Option<sqlx::postgres::PgPool>,
    ) -> Result<Self, std::io::Error> {
        let address = format!(
            "{}:{}",
            settings.application.host, settings.application.port
        );

        let listener = tokio::net::TcpListener::bind(&address).await.unwrap();
        let port = listener.local_addr().unwrap().port();

        tracing::info!("Listening on {}", &address);

        run(listener, settings).await;

        Ok(Self { port })
    }
}

async fn run(
    listener: tokio::net::TcpListener,
    settings: crate::settings::Settings,
) {
    // build our application with a route
    let app = axum::Router::new()
        .route(
            "/api/health-check",
            axum::routing::get(routes::health_check),
        )
        .layer(tower_http::trace::TraceLayer::new_for_http());

    axum::serve(listener, app.into_make_service())
        .with_graceful_shutdown(shutdown_signal())
        .await
        .unwrap();
}

async fn shutdown_signal() {
    let ctrl_c = async {
        signal::ctrl_c()
            .await
            .expect("failed to install Ctrl+C handler");
    };

    #[cfg(unix)]
    let terminate = async {
        signal::unix::signal(signal::unix::SignalKind::terminate())
            .expect("failed to install signal handler")
            .recv()
            .await;
    };

    #[cfg(not(unix))]
    let terminate = std::future::pending::<()>();

    tokio::select! {
        _ = ctrl_c => {},
        _ = terminate => {},
    }
}

The build method is similar to the one in this series. What interests us is the content of the run function. Axum uses Router to channel requests: you can have a simple route like /api/health-check, and all legitimate requests to that URL will be handled by the "handler" you point it to. Handlers are asynchronous functions that can take request extractors as arguments and return responses to the client; the return value must be convertible into a response. An example of a handler is health_check, located in src/routes/health.rs:

use crate::utils::SuccessResponse;
use axum::{http::StatusCode, response::IntoResponse};

#[tracing::instrument]
pub async fn health_check() -> impl IntoResponse {
    SuccessResponse {
        message: "Rust(Axum) and SvelteKit application is healthy!".to_string(),
        status_code: StatusCode::OK.as_u16(),
    }
    .into_response()
}

This handler doesn't take an extractor, but its return type implements IntoResponse. SuccessResponse implements it in src/utils/responses.rs:

use axum::{
    http::StatusCode,
    response::{IntoResponse, Response},
};
use serde::Serialize;

#[derive(Serialize)]
pub struct SuccessResponse {
    pub message: String,
    pub status_code: u16,
}

impl IntoResponse for SuccessResponse {
    fn into_response(self) -> Response {
        let status = StatusCode::from_u16(self.status_code).unwrap_or(StatusCode::OK);
        let json_body = axum::Json(self);

        // Convert Json to Response
        let mut response = json_body.into_response();

        // Set the correct status code
        *response.status_mut() = status;

        response
    }
}

I have a struct that is serializable and I implemented IntoResponse for it. The into_response method uses axum::Json to serialize the struct into JSON, and then uses Json's own into_response() to create a response. Since I wanted to be able to set the status code on each call, I used status_mut to overwrite it. You don't have to do it this way though.
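
For instance, axum already implements IntoResponse for a (StatusCode, Json<T>) tuple, so an equivalent health-check handler could be written as in this sketch, without a custom struct:

// Sketch: the same response using axum's built-in IntoResponse for tuples.
use axum::{http::StatusCode, Json};
use serde_json::json;

pub async fn health_check_alt() -> (StatusCode, Json<serde_json::Value>) {
    (
        StatusCode::OK,
        Json(json!({
            "message": "Rust(Axum) and SvelteKit application is healthy!",
            "status_code": 200
        })),
    )
}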

You also get to specify the accepted HTTP method of the URL via axum::routing. True to its emphasis on modularity, Axum also supports nested routes, as we'll see later in this series. Next is layer, a method used to apply a tower::Layer to all routes registered before it. This means that routes added after the call to layer will not have that layer applied to their requests. In our case, we used the layer to add tracing to all HTTP requests and responses to our routes, which is needed for proper logging. The tower_http::trace::TraceLayer can even be heavily customised.
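
To make that ordering concrete, here is a small self-contained sketch (the handlers and routes are hypothetical, not part of the project):

use axum::{routing::get, Router};
use tower_http::trace::TraceLayer;

async fn traced_handler() -> &'static str {
    "I am traced"
}

async fn untraced_handler() -> &'static str {
    "I am not traced"
}

fn build_router() -> Router {
    Router::new()
        // registered BEFORE the layer, so requests here are traced
        .route("/traced", get(traced_handler))
        .layer(TraceLayer::new_for_http())
        // registered AFTER the layer, so requests here are NOT traced
        .route("/untraced", get(untraced_handler))
}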

Having created an app instance, what's left is to serve it. In axum version 0.7, we use the serve method to supply the listener for the address our app should listen on and, via with_graceful_shutdown, ensure the application shuts down gracefully. Graceful shutdown support in serve was introduced in this latest version and it's quite nifty! If you go through my series on building stuff with Go, I explained more about graceful shutdown of application processes there, especially for concurrent applications that spawn many tasks, like ours. The code for the shutdown signal was taken from this example.

With that explained, our application still can't serve any requests. This is because Rust applications use main as their entry point, so we need a main function to serve as our application's entry point. Therefore, edit src/main.rs to look like this:

use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    dotenv::dotenv().ok();

    let settings = backend::settings::get_settings().expect("Failed to read settings.");

    let subscriber_format = if settings.debug {
        tracing_subscriber::EnvFilter::try_from_default_env().unwrap_or_else(|_| {
            "backend=debug,tower_http=debug,axum::rejection=debug,sqlx=debug".into()
        })
    } else {
        tracing_subscriber::EnvFilter::try_from_default_env().unwrap_or_else(|_| {
            "backend=info,tower_http=info,axum::rejection=info,sqlx=info".into()
        })
    };

    tracing_subscriber::registry()
        .with(subscriber_format)
        .with(tracing_subscriber::fmt::layer())
        .init();

    backend::startup::Application::build(settings, None).await?;

    Ok(())
}

Axum is built on top of tokio and uses it as its runtime. It is also maintained by the tokio team, with David Pedersen, its creator, leading the pack. Therefore, our application uses the tokio runtime exclusively! In the function, we first load our .env file using dotenv. We then load the settings we defined earlier. Using the debug attribute of the Settings struct, we set how verbose the tracing (logging) should be for our application (backend), tower_http, and sqlx. For production, we will change the format of the logs to JSON for easy parsing. Lastly, we use the build method defined in src/startup.rs to serve the application. It's time to test it out!

Open up your terminal and issue the following command:

~/cryptoflow/backend$ cargo watch -x 'run --release'

I used cargo-watch here so that every time the source changes, the server automatically restarts and serves the updated code.
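
If you don't have it installed yet, cargo-watch ships as a separate binary:

~/cryptoflow/backend$ cargo install cargo-watch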

If you now visit http://127.0.0.1:8080/api/health-check in your browser, you should see something like:

{
  "message": "Rust(Axum) and SvelteKit application is healthy!",
  "status_code": 200
}

Yay!!! It's awesome!

Step 3: Project's SQL relations

As stated, CryptoFlow will support user authentication and authorization as well as question-and-answer management. We need to persist these data, and to do that, we need SQL relations. We're building the application with PostgreSQL. Since we use SQLx, our data schema needs to live inside the migrations folder at the root of the project (at the same level as src). Create this folder and then issue the following command:

~/cryptoflow/backend$ sqlx migrate add -r users_table

This will create two .sql files inside the migrations folder due to the -r argument (if you don't want two files, you can omit the argument). One of the files, .up.sql, should contain the table definitions while .down.sql should reverse whatever the .up.sql file does.
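
Although we'll soon let SQLx run migrations automatically at startup, sqlx-cli can also apply and roll them back manually (both commands read DATABASE_URL):

~/cryptoflow/backend$ sqlx migrate run
~/cryptoflow/backend$ sqlx migrate revert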

The contents of the files are:

-- migrations/*_users_table.down.sql
-- Add down migration script here
DROP TABLE IF EXISTS users;
-- migrations/*_users_table.up.sql
-- Add up migration script here
CREATE TABLE IF NOT EXISTS users(
    id UUID NOT NULL PRIMARY KEY DEFAULT gen_random_uuid(),
    email TEXT NOT NULL UNIQUE,
    password TEXT NOT NULL,
    first_name TEXT NOT NULL,
    last_name TEXT NOT NULL,
    is_active BOOLEAN DEFAULT FALSE,
    is_staff BOOLEAN DEFAULT FALSE,
    is_superuser BOOLEAN DEFAULT FALSE,
    thumbnail TEXT NULL,
    date_joined TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX IF NOT EXISTS users_id_is_active_indx ON users (id, is_active);
CREATE INDEX IF NOT EXISTS users_email_is_active_indx ON users (email, is_active);

As can be seen, we created a relation, users, with 10 attributes (columns) of various domains (data types). A user should have a unique identifier and an email. For authentication, the user's HASHED password should also be saved (BEWARE of saving plain passwords!!). We also want basic user information such as first_name, last_name, and thumbnail (the user's picture). For authorization, we have three flags:

  • is_active: Just registering on our application is not enough, as some malicious users might register with emails that don't exist. When a user registers, we default this field to false. As soon as the user correctly provides the token sent to their email, we set it to true. Before you can log into our app, you MUST be an active user!
  • is_staff: In a typical system serving a company, there are some elevated users, such as members of staff, who have certain privileges above the users of the company's product. This field helps to distinguish them.
  • is_superuser: This is reserved for the creator of the company or application. Such a person should have pretty broad access to things. This shouldn't be abused though.

We also created indexes for the table. Since we'll frequently filter users by either the id and is_active or the email and is_active combination, it's good to have them as indexes to speed up lookups by minimizing I/O operations and enabling efficient access to the storage disk. A properly indexed lookup (using the B+-tree that most DBMSs use) is fast: O(log N) for N tuples (rows). It comes at a cost though: inserting data into the table will be impacted a bit.

Next is the qanda relation:

~/cryptoflow/backend$ sqlx migrate add -r qanda
-- migrations/*_qanda.down.sql
-- Add down migration script here
DROP TABLE IF EXISTS question_tags;
DROP TABLE IF EXISTS tags;
DROP TABLE IF EXISTS answers;
DROP TABLE IF EXISTS questions;
DROP FUNCTION IF EXISTS update_questions_timestamp();
DROP FUNCTION IF EXISTS update_answers_timestamp();
-- migrations/*_qanda.up.sql
-- Add up migration script here
-- Trigger function to update the timestamp on the 'questions' table
CREATE OR REPLACE FUNCTION update_questions_timestamp() RETURNS TRIGGER AS $$ BEGIN NEW.updated_at = NOW();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Trigger function to update the timestamp on the 'answers' table
CREATE OR REPLACE FUNCTION update_answers_timestamp() RETURNS TRIGGER AS $$ BEGIN NEW.updated_at = NOW();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Questions table
CREATE TABLE IF NOT EXISTS questions (
    id UUID NOT NULL PRIMARY KEY DEFAULT gen_random_uuid(),
    title TEXT NOT NULL,
    slug TEXT NOT NULL UNIQUE,
    content TEXT NOT NULL,
    raw_content TEXT NOT NULL,
    author UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX IF NOT EXISTS questions_index_title ON questions (title, slug);
CREATE TRIGGER update_questions_timestamp BEFORE
UPDATE ON questions FOR EACH ROW EXECUTE PROCEDURE update_questions_timestamp();
-- Answers table
CREATE TABLE IF NOT EXISTS answers (
    id UUID NOT NULL PRIMARY KEY DEFAULT gen_random_uuid(),
    content TEXT NOT NULL,
    raw_content TEXT NOT NULL,
    author UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    question UUID NOT NULL REFERENCES questions(id) ON DELETE CASCADE,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE TRIGGER update_answers_timestamp BEFORE
UPDATE ON answers FOR EACH ROW EXECUTE PROCEDURE update_answers_timestamp();
-- Tags table
CREATE TABLE IF NOT EXISTS tags (
    id VARCHAR(255) NOT NULL PRIMARY KEY,
    name VARCHAR (255) NOT NULL,
    symbol VARCHAR (255) NOT NULL
);
CREATE INDEX IF NOT EXISTS tags_index_name ON tags (name);
CREATE INDEX IF NOT EXISTS tags_index_symbol ON tags (symbol);
-- Question tags table
CREATE TABLE IF NOT EXISTS question_tags (
    question UUID NOT NULL REFERENCES questions(id) ON DELETE CASCADE,
    tag VARCHAR(255) NOT NULL REFERENCES tags(id) ON DELETE CASCADE,
    PRIMARY KEY (question, tag)
);

The .up.sql is a bit more involved than what we had previously. The file has two trigger functions, update_questions_timestamp and update_answers_timestamp, that automatically update the updated_at fields of the questions and answers relations whenever there is an update. I could have used a single trigger function for this, but I chose two for clarity. We defined a couple of relations:

  • questions: This table has 8 attributes that help manage the questions our app users create. It references the users table with the constraint that if a user, say a, authors two questions, say b and c, then b and c get deleted as soon as a gets deleted. That's what ON DELETE CASCADE does! CASCADE is one of the foreign key constraint actions available in DBMSs. The full options are NO ACTION | CASCADE | SET NULL | SET DEFAULT, each having different effects. There are also content and raw_content attributes: the former stores the compiled markdown of the question while the latter stores the raw markdown, which makes it possible to edit users' questions later.
  • answers: With 7 attributes, this relation stores answers to users' questions (hence it references the questions table using the same foreign key constraint discussed above).
  • tags: This table stores tags (in our case, coins). It has 3 attributes, and its data will come directly from the CoinGecko API, refreshed daily by the background task mentioned earlier.
  • question_tags: This table is interesting in that it has only two attributes, both referencing other tables. It models a many-to-many relationship: each question can have multiple tags (limited to 4 in our case, which will be enforced later) and each tag can be used by multiple questions. A query like the sketch after this list resolves a question's tags.
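
For example, fetching all the tags attached to a given question goes through the link table. A sketch against the schema above (the UUID is a placeholder):

-- Sketch: resolve a question's tags via the question_tags link table.
SELECT t.id, t.name, t.symbol
FROM tags t
    JOIN question_tags qt ON qt.tag = t.id
WHERE qt.question = '<question-uuid>';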

Step 4: Application store

With the schemas designed, we need a modular way to talk to the database. This brings us to a store module that does just that:

// src/store/general.rs
use sqlx::postgres::{PgPool, PgPoolOptions};

#[derive(Clone, Debug)]
pub struct Store {
    pub connection: PgPool,
}

impl Store {
    pub async fn new(db_url: &str) -> Self {
        match PgPoolOptions::new()
            .max_connections(8)
            .connect(db_url)
            .await
        {
            Ok(pool) => Store { connection: pool },
            Err(e) => {
                panic!("Couldn't establish DB connection! Error: {}", e)
            }
        }
    }
}

We defined a "clonable" and "debug-compatible" Store store and added a new method to aid easy database pool creation. We currently only allow 8 connections but this can be any reasonable number and can be made configurable. With this underway, we can modify our build method and allow our application's state to have access to the database pool so that any handler can access and use it:

// src/startup.rs
...

#[derive(Clone)]
pub struct AppState {
    pub db_store: crate::store::Store,
}

impl Application {
    pub fn port(&self) -> u16 {
        self.port
    }
    pub async fn build(
        settings: crate::settings::Settings,
        test_pool: Option<sqlx::postgres::PgPool>,
    ) -> Result<Self, std::io::Error> {
        let store = if let Some(pool) = test_pool {
            crate::store::Store { connection: pool }
        } else {
            let db_url = std::env::var("DATABASE_URL").expect("Failed to get DATABASE_URL.");
            crate::store::Store::new(&db_url).await
        };

        sqlx::migrate!()
            .run(&store.clone().connection)
            .await
            .expect("Failed to migrate");
        ...
        run(listener, store, settings).await;
        ...
    }
}

async fn run(
    listener: tokio::net::TcpListener,
    store: crate::store::Store,
    settings: crate::settings::Settings,
) {
    ...
    let app_state = AppState {
        db_store: store,
    };
    // build our application with a route
    let app = axum::Router::new()
        ...
        .with_state(app_state.clone())
        ...
        ;

    ...
}
...

In the updated code, we defined a clonable AppState struct. It needs to be clonable because with_state requires it (there are some ways around this though). In the build method, we detect which environment we're in (testing or normal) so that we can appropriately fetch the database URL needed to connect to the database. We defined DATABASE_URL in our .env file and it looks like this:

DATABASE_URL=postgres://<user>:<password>@<host>:<port>/<database_name>

Next, since we activated the migrate feature in our SQLx installation, we let it migrate the database automatically at startup. This saves us from running migrations manually, though it has its downsides. Without migrations, the schema we designed would never be applied to the database.

We then passed the store to the AppState, which gets propagated to the entire application.
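
Any handler can then pull the pool out of the shared state with axum's State extractor. A minimal sketch (the db_ping handler and its query are illustrative, not part of the project):

// Sketch: reaching the pool from a handler via the State extractor.
use axum::{extract::State, http::StatusCode, response::IntoResponse};

pub async fn db_ping(State(state): State<crate::startup::AppState>) -> impl IntoResponse {
    match sqlx::query("SELECT 1")
        .execute(&state.db_store.connection)
        .await
    {
        Ok(_) => (StatusCode::OK, "database reachable").into_response(),
        Err(e) => (StatusCode::INTERNAL_SERVER_ERROR, format!("db error: {e}")).into_response(),
    }
}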

That's it for the first article in the series!!! See y'all in the next one.

Outro

Enjoyed this article? I'm a Software Engineer and Technical Writer actively seeking new opportunities, particularly in areas related to web security, finance, health care, and education. If you think my expertise aligns with your team's needs, let's chat! You can find me on LinkedIn and Twitter.

If you found this article valuable, consider sharing it with your network to help spread the knowledge!
