Welcome back! The next thing we'll focus on fixing up is our connection to postgres. Currently, any time we want to talk to our database, we open a brand new connection via our establish_connection function. Instead, we're going to use connection pooling!
Connection Pooling
Database connections aren't cheap: they take time to set up, so opening a new connection every time we want to run a query is wasteful. With connection pooling, we maintain a set of connections to our database that our requests can then reuse.
This speeds things up quite a bit!
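For reference, our current approach goes through the establish_connection function from earlier in the series. If you've been following along with Diesel's getting-started guide, yours probably looks roughly like this sketch (your exact version may differ):

use diesel::prelude::*;
use diesel::pg::PgConnection;
use dotenv::dotenv;

pub fn establish_connection() -> PgConnection {
    // Load .env and read the connection string.
    dotenv().ok();
    let database_url = std::env::var("DATABASE_URL")
        .expect("DATABASE_URL must be set");

    // Open a brand new connection on every call; this is the setup cost
    // that connection pooling will save us.
    PgConnection::establish(&database_url)
        .expect(&format!("Error connecting to {}", database_url))
}

Every handler that calls this pays that full setup cost on every single request.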
The first thing we need to do is include the generic connection pooling crate, r2d2, and then enable the r2d2 feature for diesel.
./Cargo.toml
...
diesel = { version = "1.4.4", features = ["postgres", "chrono", "r2d2"] }
...
r2d2 = "0.8"
...
Once we have our connection pooling crate and feature set, we can now set up a pool in our main.rs file.
The first thing we'll do is include the r2d2 connection pooling crate.
./src/main.rs
...
use diesel::prelude::*;
use diesel::pg::PgConnection;
use diesel::r2d2::ConnectionManager;
type Pool = r2d2::Pool<ConnectionManager<PgConnection>>;
...
Here we added the ConnectionManager type, and we use the r2d2 crate for the pool type itself. We also added a type alias so that "Pool" now refers to that specific type; we'll need this type in a few places and having to type it all out every time would be painful.
Now let's set up our connections.
./src/main.rs
...
#[actix_web::main]
async fn main() -> std::io::Result<()> {
    dotenv().ok();

    let database_url = std::env::var("DATABASE_URL")
        .expect("DATABASE_URL must be set");
    let manager = ConnectionManager::<PgConnection>::new(database_url);
    let pool = r2d2::Pool::builder()
        .build(manager)
        .expect("Failed to create postgres pool.");

    env_logger::init();

    HttpServer::new(move || {
        let tera = Tera::new("templates/**/*").unwrap();

        App::new()
            .wrap(Logger::default())
            .wrap(IdentityService::new(
                CookieIdentityPolicy::new(&[0; 32])
                    .name("auth-cookie")
                    .secure(false)
            ))
            .data(tera)
            .data(pool.clone())
            .route("/", web::get().to(index))
...
After a helpful comment from below, I've updated this chunk of code to be more in line with the way Actix does things. Originally, the pool creation was happening inside the closure we pass to HttpServer::new, which meant we were creating a separate pool of database connections for each actix worker thread. Actix creates a worker thread for each core you have, and the commenter below had 16 cores, which meant 16 actix threads got started. Each pool holds 10 connections by default, so they would have opened 160 connections to postgres, exceeding the default maximum of 100.
My PC had only 4 cores so it was well under the limit which is why I didn't run into the same issue!
The correction was to move the pool out of the anonymous function so that it is shared across threads; now those 16 actix threads share a single pool of 10 connections. Much better!
So, to walk through the code: we set up our connection pooling outside our anonymous function so that all our actix threads can share the same pool of connections.
We start by using dotenv to set up our environment. We then read in the database_url from our environment.
Next we set up a ConnectionManager using our database_url. This is very much like what we do in our establish_connection function.
Next we build a pool of connections using our ConnectionManager. This is the pool we'll draw from: every time we call pool.get(), we get back a connection to postgres that we can use.
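A quick aside: by default the pool holds 10 connections (this comes up in the comments below). If you ever need a different size, r2d2's builder exposes max_size(). Here's a small sketch, assuming the imports and the Pool alias from above; build_pool is a made-up helper, not part of our actual code:

fn build_pool(database_url: &str) -> Pool {
    let manager = ConnectionManager::<PgConnection>::new(database_url);
    r2d2::Pool::builder()
        // Keep (number of pools) x (max_size) below postgres' max_connections.
        .max_size(10)
        .build(manager)
        .expect("Failed to create postgres pool.")
}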
We do a move in our anonymous function because we want to shift ownership from our main function into the closure; that way, when the worker threads get started, each of them gets its own handle to the connection pool.
The next step is where we register the pool with our App. This is done via the data call, just like with tera. This means that, like our template engine, we can now pass this pool object around and make it available to all of our route handling functions.
The thing to note here is that we do a pool.clone() in our data call. This is because each thread's App needs its own handle to the pool; the clone is cheap, and every clone still points at the same underlying set of connections.
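If the move and clone dance feels unfamiliar, here's a tiny standalone example (plain std, nothing actix-specific and not part of our project's code) of the same pattern: we clone a cheap, shared handle and let each thread's closure take ownership of its clone.

use std::sync::Arc;
use std::thread;

fn main() {
    // Stand-in for our pool: an Arc is a cheaply clonable handle to shared data.
    let shared = Arc::new(String::from("pretend this is our connection pool"));

    let handles: Vec<_> = (0..4)
        .map(|i| {
            let handle = shared.clone(); // cheap clone of the handle; the data stays shared
            thread::spawn(move || {
                // `move` gives this thread ownership of its handle, so the
                // closure can outlive the scope that created it.
                println!("worker {} sees: {}", i, handle);
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
}

r2d2's Pool works the same way: cloning it is cheap and every clone points at the same pool of connections.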
Let's update our index function to use our new pool object!
./src/main.rs
...
async fn index(tera: web::Data<Tera>, pool: web::Data<Pool>) -> impl Responder {
    use schema::posts::dsl::posts;
    use schema::users::dsl::users;

    let connection = pool.get().unwrap();

    let all_posts: Vec<(Post, User)> = posts.inner_join(users)
        .load(&connection)
        .expect("Error retrieving all posts.");

    let mut data = Context::new();
    data.insert("title", "Hacker Clone");
    data.insert("posts_users", &all_posts);

    let rendered = tera.render("index.html", &data).unwrap();
    HttpResponse::Ok().body(rendered)
}
...
Here we add our pool as a parameter we pass into our index function. This is why we made the type alias; had we not created a Pool type alias, we would have had to type out pool: web::Data<r2d2::Pool<ConnectionManager<PgConnection>>> everywhere.
Next, we change our connection variable from "establish_connection()" to "pool.get().unwrap()". With that, we are done! We now have connection pooling set up.
We can now do the same thing for all of our establish_connection calls: we first add the pool parameter to the function signature and then swap out the connection variable, as in the sketch below.
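For example, a handler that loads users might change like this. This is only a sketch: user_profile and profile.html are made-up names, and it assumes the same imports, models, and Pool alias as our index function; substitute your own handlers.

async fn user_profile(tera: web::Data<Tera>, pool: web::Data<Pool>) -> impl Responder {
    use schema::users::dsl::users;

    // before: let connection = establish_connection();
    let connection = pool.get().unwrap();

    let all_users: Vec<User> = users
        .load(&connection)
        .expect("Error retrieving users.");

    let mut data = Context::new();
    data.insert("title", "Users");
    data.insert("users", &all_users);

    let rendered = tera.render("profile.html", &data).unwrap();
    HttpResponse::Ok().body(rendered)
}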
With that, we should see no visible difference in our website! But we'll know in our hearts that connection pooling is up and working!
In the next chapter we'll work on some error handling, get ready!
Top comments (2)
Creating the connection pool inside the anonymous function (the argument of HttpServer::new()) actually creates a pool for each HttpServer worker thread. On my server, this behavior resulted in 16 pools x 10 connections per pool = 160 connections.

The 16 is from actix-web: my CPU has 16 hyper-threads. The 10 is the default size of each pool, which can be changed via max_size() during construction.

That attempt to open 160 connections to PostgreSQL exceeded the default constraint max_connections = 100, so I got panic messages shouting "Sorry, too many clients already".

I would suggest following the implementation from the actix example. Basically, it creates the pool outside the HttpServer construction.

Updated, thanks!