We previously left off with our server able to handle static content, but that is about all. In order to store and retrieve confessions, our app needs to interact with a database. That’s where Diesel comes to our aid!
⚠ In order for Diesel to interact with a database, a database instance needs to already exist. Make sure you have access to a Postgres instance (local or cloud based, both work) before moving forward.
Housekeeping
- We begin by installing diesel_cli — a tool that helps us manage the database. As we only use Diesel with Postgres, we use the features flag to specify that:
cargo install diesel_cli --no-default-features --features postgres
- In the root folder of the project, create a .env file. At the top of the file add the DATABASE_URL property that Diesel will use to get the connection details of your Postgres instance.
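A minimal .env file might look like the following — the user, password, host, port, and database name are placeholders, so substitute your own connection details:

```shell
DATABASE_URL=postgres://postgres:password@localhost:5432/confessions
```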
- In the project root folder run diesel setup. Diesel will create a new database (confessions), as well as a set of empty migrations.
Using Migrations
With the database setup, it’s time to create the confessions table. Diesel uses a concept called migrations to track changes done to a database schema. You can think of migrations as a list of actions that you either apply to the database ( up.sql ) or revert ( down.sql ).
- Generate a new migration set for the confessions table by running the following command at the root of the project:
diesel migration generate confessions_table
This creates a new folder inside of the migrations folder that holds the new migration set (up/down.sql) for the confessions table:
- To create the new table, cd to the new migration folder and add the following to the up.sql file:
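Our table needs, at minimum, an auto-incrementing id and the confession text itself. Assuming that shape (adapt the columns if your model differs), up.sql could look like this:

```sql
-- Create the confessions table with an auto-incrementing primary key
CREATE TABLE confessions (
  id SERIAL PRIMARY KEY,
  confession TEXT NOT NULL
);
```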
- In down.sql we specify how to revert the migration (i.e., dropping the confessions table):
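Reverting is simply the inverse of the migration above:

```sql
-- Undo the up.sql migration
DROP TABLE confessions;
```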
- Apply the new migration by running the following command:
diesel migration run
💡 To revert the last migration, run diesel migration revert. To revert and immediately re-apply it (handy while iterating on a migration), run diesel migration redo.
- cd to the src folder to find a new file called schema.rs. This file contains the table definition created by Diesel that enables us to work with the database in a typesafe way.
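Assuming the up.sql migration above, the generated schema.rs would look roughly like this (Diesel maps Postgres's SERIAL to Int4):

```rust
// Auto-generated by `diesel migration run` — do not edit by hand.
table! {
    confessions (id) {
        id -> Int4,
        confession -> Text,
    }
}
```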
With the housekeeping behind us, we proceed to establish a connection between our Rocket instance and Diesel.
Getting Connected
The first rule of working with a database is connecting to it. A common approach is a connection pool — a data structure that maintains a set of active database connections which the application can borrow whenever it needs one.
Rocket, with its rocket_contrib crate (a crate that adds functionality commonly used by Rocket applications), allows us to easily set up a connection pool to our database using an ORM of our choice. In our case, that’s going to be Diesel.
- We begin by adding three dependencies to our Cargo.toml file: diesel, serde, and rocket_contrib:
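A sketch of the dependency section — the version numbers are illustrative (this series tracks a pre-release of Rocket with async database support, so pin whatever versions you are actually following along with):

```toml
[dependencies]
diesel = { version = "1.4", features = ["postgres"] }
serde = { version = "1.0", features = ["derive"] }
rocket_contrib = { version = "0.5.0-dev", default-features = false, features = ["diesel_postgres_pool", "json"] }
```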
🔭 As we only need certain features from the above crates, we specify these features using the features property. In our case we only need Postgres support so we specify that in the features list for diesel and rocket_contrib.
- Add the following import statements to your main.rs file:
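At minimum we need the macros from these crates in scope (the exact use statements will grow as we add models and routes later in this part):

```rust
#[macro_use] extern crate rocket;
#[macro_use] extern crate rocket_contrib;
#[macro_use] extern crate diesel;
```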
- Next, we configure the connection settings for our database. Create a new file named Rocket.toml in the root folder of the project with the following content:
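A sketch of Rocket.toml — confessions_db is the database key we will bind to in a moment, and the connection string is a placeholder for your own instance:

```toml
[global.databases]
confessions_db = { url = "postgres://postgres:password@localhost:5432/confessions" }
```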
At this point you are probably thinking to yourself: “Johnny WTF? this is exactly the same connection string we configured earlier in the .env file. Can’t we just use that environmental variable and be done with it?”
Of course you can! But I’ll touch on how to do that a little bit later. For now, let’s roll with Rocket.toml.
- Open your main.rs file and add a new unit-like struct called DBPool:
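A sketch of the struct, assuming the #[database] attribute macro from rocket_contrib and the confessions_db key from Rocket.toml:

```rust
use rocket_contrib::databases::diesel;

// Binds the "confessions_db" database configured in Rocket.toml
// to a poolable connection type we can use in our handlers.
#[database("confessions_db")]
struct DBPool(diesel::PgConnection);
```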
The database attribute binds a previously configured database to a poolable type in our application. It accepts the name of the database to bind as a single string parameter, which must match a database key configured in Rocket.toml. The macro generates all the code needed on the decorated type to retrieve a connection from the database pool later on, or fail with an error.
- Lastly we need to attach the database to our Rocket instance. We do that using the attach method of our Rocket instance. The attach method takes a fairing (think of that like a middleware) and attaches it to the request flow.
Append the attach method to the Rocket instance as follows:
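Assuming a rocket function along the lines of the one from the previous part, the attachment might look like this:

```rust
fn rocket() -> rocket::Rocket {
    rocket::ignite()
        // Attach the database pool fairing so handlers can request connections.
        .attach(DBPool::fairing())
        // ...your existing mounts (static content, etc.) stay as they were...
}
```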
🍲 Before we proceed, I’d like to take a quick (completely optional) detour and talk about how to use the connection string from the .env file instead of duplicating it with Rocket.toml. If you don’t feel like messing around with creating a database procedurally, feel free to skip over to the next section — Working With Models.
- If created earlier, delete the Rocket.toml file from the root folder of the project.
- dotenv is a crate that makes it super easy to work with environmental variables from a .env file. Add the dotenv crate as a dependency in your Cargo.toml file:
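The version here is illustrative — check crates.io for the latest:

```toml
[dependencies]
dotenv = "0.15"
```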
- In main.rs , refactor your rocket function as follows:
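A sketch of what the refactor might look like, assuming a pre-release Rocket with Figment-based configuration (the map! macro comes from rocket::figment::util):

```rust
use std::env;

use dotenv::dotenv;
use rocket::figment::{util::map, value::{Map, Value}};

fn rocket() -> rocket::Rocket {
    // Load variables from .env into the process environment.
    dotenv().ok();

    let db_url = env::var("DATABASE_URL").expect("DATABASE_URL must be set");

    // Same keys we would have put in Rocket.toml, built programmatically.
    let db: Map<_, Value> = map! {
        "url" => db_url.into(),
        "pool_size" => 10.into(),
    };

    // Merge our database config into Rocket's default Figment.
    let figment = rocket::Config::figment()
        .merge(("databases", map!["confessions_db" => db]));

    rocket::custom(figment)
        .attach(DBPool::fairing())
}
```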
🔬So what do we have here?
- We call the dotenv function (from the dotenv crate) to load the variables found in the root folder's .env file into the process environment.
- Using Rust's standard library (std::env::var), we read the value of the DATABASE_URL variable.
- We define a Map that holds two keys — url for the connection string and pool_size for the size of the connection pool.
- We create a Rocket config using Figment and add our database name (confessions_db) as a key under the databases collection. This closely resembles the Rocket.toml file, and for a good reason — it's basically the same thing, just done programmatically instead of in a TOML-formatted file.
- Instead of initializing the Rocket instance with ignite, we use the custom method, passing in the Figment configuration object we just created.
Before moving on to creating the Database models, let’s build the project to make sure everything compiles as expected.
🏗️ When I tried to compile the project on my Macbook, I got a compilation error related to Diesel stating that I’m missing libpq. If you happen to get the same, follow these steps:
- Install libpq using homebrew: brew install libpq
- In the project root folder create a new folder named .cargo and inside of it create a new file called config with the following content:
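One way to do it is to point the linker at Homebrew's libpq — note the path depends on your Homebrew prefix (/opt/homebrew/opt/libpq/lib on Apple Silicon, /usr/local/opt/libpq/lib on Intel):

```toml
[target.'cfg(target_os = "macos")']
rustflags = ["-L", "/usr/local/opt/libpq/lib"]
```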
- Run cargo build and enjoy.
Working With Models
To represent our database table in a type-safe way, we need to create a model (struct) that represents it. Think of a model as the link connecting your database table with your Rust code.
- In your src folder create a new file called models.rs. This file will be the home for the models we use in our project.
- We begin with the Confession model, which is used when querying the database. Add the Confession struct to models.rs :
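A sketch of the model, assuming the id and confession columns from our migration (we also derive Serialize so the model can be sent back as JSON later):

```rust
use serde::Serialize;

// Readable result from the confessions table; field order must
// match the column order in schema.rs.
#[derive(Queryable, Serialize)]
pub struct Confession {
    pub id: i32,
    pub confession: String,
}
```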
Our struct looks identical to the Postgres table schema we created earlier (you can peek into schema.rs for a reminder on how it looks). But what is this Queryable attribute on top of it? It is a Diesel attribute that basically marks this struct as a READABLE result from the database. Under the hood, it will generate the code needed to load a result from a SQL query.
📢 The order of the fields in the model matters! Make sure to define them in the same order as the table definition in schema.rs .
- To save a confession to the database, we don’t need to specify the id property since it's auto-incremented on the database side. For this reason, we will create an additional model in our models.rs file called NewConfession:
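The insertable counterpart, minus the auto-incremented id:

```rust
use crate::schema::confessions;

// Insertable payload for the confessions table — no id field,
// since Postgres generates it for us.
#[derive(Insertable)]
#[table_name = "confessions"]
pub struct NewConfession {
    pub confession: String,
}
```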
We annotate this new model with the Insertable attribute so it can be used to INSERT data to our database. In addition, we also add the table_name attribute to specify which table this model is allowed to insert data to.
- Lastly, we add the schema and models modules to main.rs :
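Near the top of main.rs:

```rust
mod models;
mod schema;
```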
Handling API Requests
It’s time to add a new route handler to our Rocket instance to handle POST requests containing new confessions.
- In main.rs , add a new struct named ConfessionJSON, which represents the JSON data sent to us from the browser:
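Assuming the browser sends a single confession field, the struct might look like this:

```rust
use serde::Deserialize;

// Shape of the JSON payload sent by the browser.
#[derive(Deserialize)]
pub struct ConfessionJSON {
    pub confession: String,
}
```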
- Add a new struct named NewConfessionResponse which represents the JSON response we send back to the browser upon adding a new confession:
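A sketch of the response, assuming it wraps the freshly inserted Confession (which now carries its database-generated id):

```rust
use serde::Serialize;

use crate::models::Confession;

// JSON response returned after a confession is saved.
#[derive(Serialize)]
pub struct NewConfessionResponse {
    pub confession: Confession,
}
```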
- Add a new POST route that will handle requests to /confession :
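A sketch of the route definition, using the structs from this part and the CustomError type we write shortly (the body is stubbed out here, since we fill it in next):

```rust
use rocket::response::status::Created;
use rocket_contrib::json::Json;

#[post("/confession", format = "application/json", data = "<confession>")]
async fn post_confession(
    connection: DBPool,
    confession: Json<ConfessionJSON>,
) -> Result<Created<Json<NewConfessionResponse>>, CustomError> {
    todo!()
}
```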
🔬 So what do we have here?
- We define the route using three attributes:
- post — The HTTP verb this route is bound to.
- format — The required content type of the request. In our case we are going to use application/json. Any POST request to /confession which does not have a content type of application/json will NOT be routed to the post_confession handler.
- data — The name of the variable the request body will be bound to. In this example, I named the variable confession (surrounded by < and >, which is a must), but you can name it anything you like, as long as you use the same name in the handler's signature.
- We then define the handler for the /confession route. We take confession as an argument (the same variable named in the data attribute) and set its type to ConfessionJSON wrapped in serde's Json type. Serde deserializes the request body's JSON payload into a Rust struct (ConfessionJSON), giving us a typesafe way to access it. In addition to confession, we get access to the database connection pool we created earlier, thanks to attaching it to our Rocket instance.
- The handler returns a Result containing either a JSON response with an HTTP status of 201 (Created) or an error (using a custom error type that we will write next).
- Add the implementation for the post_confession handler:
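A sketch of the full handler — it assumes the models and response structs above, plus the CustomError type from the next step (the exact Created API depends on your pre-release Rocket version):

```rust
use diesel::prelude::*;
use rocket::response::status::Created;
use rocket_contrib::json::Json;

use crate::models::{Confession, NewConfession};
use crate::schema::confessions;

#[post("/confession", format = "application/json", data = "<confession>")]
async fn post_confession(
    connection: DBPool,
    confession: Json<ConfessionJSON>,
) -> Result<Created<Json<NewConfessionResponse>>, CustomError> {
    let confession_text = confession.into_inner().confession;

    // Run the blocking Diesel insert on the pooled connection.
    let new_confession = connection
        .run(move |c| {
            diesel::insert_into(confessions::table)
                .values(NewConfession { confession: confession_text })
                .get_result::<Confession>(c)
        })
        .await?;

    let response = NewConfessionResponse { confession: new_confession };

    // 201 Created with the new confession as the JSON body.
    Ok(Created::new("/").body(Json(response)))
}
```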
Now this handler might seem scary (👻) in its current form, so let’s break it into smaller chunks:
- We insert the new confession by calling the run method on our pooled connection and, inside it, Diesel's insert_into function. insert_into takes the target table as its argument, and the values method receives a struct (of type NewConfession, as that is our Insertable struct) with the data that needs to be saved. Finally, we await the run call, as it is an asynchronous function.
- We build a NewConfessionResponse from the result of the insert query (the new_confession variable).
- We return a Result with the newly created confession, wrapped in Created to produce a status code of 201.
- If post_confession fails for whatever reason, it returns a CustomError error, that we need to create next. Inside the src folder, create a new file called error.rs and add the following content:
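A sketch of error.rs, assuming the failure crate for the Fail derive (remember to add mod error; to main.rs and bring CustomError into scope; the Responder lifetimes below match the async pre-release API and may need adjusting for your Rocket version):

```rust
use std::io::Cursor;

use failure::Fail;
use rocket::http::{ContentType, Status};
use rocket::request::Request;
use rocket::response::{self, Responder, Response};

// Our custom error enum, wrapping the error types we care about.
#[derive(Fail, Debug)]
pub enum CustomError {
    #[fail(display = "database error: {}", _0)]
    DatabaseError(diesel::result::Error),
}

// Lets us use `?` on Diesel results inside handlers.
impl From<diesel::result::Error> for CustomError {
    fn from(error: diesel::result::Error) -> Self {
        CustomError::DatabaseError(error)
    }
}

// Rocket needs handler return types to implement Responder.
impl<'r> Responder<'r, 'static> for CustomError {
    fn respond_to(self, _request: &'r Request<'_>) -> response::Result<'static> {
        let body = format!("{}", self);
        Response::build()
            .status(Status::InternalServerError)
            .header(ContentType::Plain)
            .sized_body(body.len(), Cursor::new(body))
            .ok()
    }
}
```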
I won’t go into much detail on what’s happening here, but the main takeaways from this file are:
- We create an enum to hold the different error types and decorate it with Failure's Fail derive.
- We implement the From trait so we can convert from Diesel's error type.
- Rocket requires the response of a handler (be it an error or a valid response) to implement the Responder trait. We implement this trait on our CustomError to display the underlying error (a diesel::result::Error) to the caller.
- Our post_confession handler is now completed 🎉. Let’s mount it to our Rocket instance’s routes with a new base of /api :
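The mount, assuming the rocket function from earlier:

```rust
rocket::custom(figment)
    .attach(DBPool::fairing())
    // All API routes live under /api, so the full path is /api/confession.
    .mount("/api", routes![post_confession])
```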
- We are finally ready to test our new API! Run the app with cargo run and on a different terminal run the following curl:
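Something along these lines, assuming Rocket's default port of 8000 (the confession text is, of course, just an example):

```shell
curl -X POST http://localhost:8000/api/confession \
  -H "Content-Type: application/json" \
  -d '{"confession": "I commit straight to main."}'
```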
If all went well you should get back a JSON response with the confession and its new ID.
That was quite a ride, wasn’t it? You’d be happy to know (or not) that adding the GET route — for getting a random confession out of Postgres — is a much simpler task:
- Add the new get_confession handler to main.rs :
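A sketch of the handler — it assumes Diesel 1.x's no_arg_sql_function! macro for binding SQL's RANDOM function:

```rust
use diesel::prelude::*;
use rocket_contrib::json::Json;

use crate::models::Confession;
use crate::schema::confessions;

// Bind SQL's RANDOM() so we can order by it.
no_arg_sql_function!(RANDOM, (), "Represents the SQL RANDOM() function");

#[get("/confession")]
async fn get_confession(connection: DBPool) -> Result<Json<Confession>, CustomError> {
    // Order the table randomly and take the first row.
    let confession = connection
        .run(|c| {
            confessions::table
                .order(RANDOM)
                .limit(1)
                .first::<Confession>(c)
        })
        .await?;

    Ok(Json(confession))
}
```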
Nothing really exciting happening here. We get a connection from the pool, query the confessions table for a single random row (ordering by SQL's RANDOM function and limiting the result to one), and eventually return the confession as JSON.
- Mount the new get_confession handler to our Rocket instance’s routes:
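The mount call now lists both handlers:

```rust
.mount("/api", routes![post_confession, get_confession])
```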
- Launch 🚀 with cargo run and in another terminal window run this lovely curl:
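Again assuming the default port of 8000:

```shell
curl http://localhost:8000/api/confession
```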
Now, what is it that we got back? A random confession from Postgres that’s what!
And with that our API is completed. We have a Rocket web server running with two API endpoints (post and get confessions) and an additional route to handle static content. Let’s move on to the final task for our website — adding the presentation layer.