Fly.io + Turso = Scalable, Multi-Region, Low-Latency App That Will Not Cost You A Kidney

Mateusz Piórowski

The problem

In my years of coding experience, I've explored various deployment methods, from simple servers to edge networks to serverless setups, all in search of the right fit.

Each option has its pros and cons, and sometimes it comes down to personal preference.

However, there's a category of apps for which I've always struggled to find the right solution:

Frontend + Backend + Database

Mostly CRM or SaaS applications. We all love them. So let's focus on them.

After 28 days of discussion, 12 scrum meetings, and 4 Kanban boards, we've finally settled on the tech stack for the project:

  • Frontend - React
  • Backend - Java
  • Database - MySQL

This stack might not come as a surprise to anyone - it's what you might call an "oldie" stack, reminiscent of what was popular around 5 years ago. Nowadays, you could swap React for NextJS, Java for Rust, and MySQL for PostgreSQL, and you can start a YouTube channel.

So it's time to deploy it.

The fancy one

Let's kick things off with the "modern" solution that you've probably heard about many times: Edge computing. The core concept here is to bring computing resources as close to the user as possible.

For the client-side, we have options like Vercel, Netlify, or my personal favorite, Cloudflare Pages. Very easy to deploy, very fast and very scalable. WHEN we are deploying static sites (or JAMstack).

But we're not deploying something static, right? We need dynamic data, interactive charts, tables, as well as the ability to save and load data.

So let's use edge for the server as well! AWS Lambda, or maybe something fancier like the new Hono JavaScript framework (because, let's face it, we all love exploring the next new JS framework).

Problem solved! We're fast, we're modern.

...but didn't we forget about something? The database.

What's the big deal? Let's purchase AWS RDS, and we're all set.

Take a look at this picture: the top one depicts the full edge solution, while the bottom one has a static server. Which one do you think takes longer?

[Image: edge server vs static server]

The biggest problem with a full edge solution is that most of the time, your database remains in one location. And the majority of the traffic occurs between the database and your servers, even with multi-region databases.

1. Servers must be located as close as possible to the database.
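To see why rule 1 matters, here's a back-of-the-envelope sketch. The round-trip times are made-up illustrative numbers, not measurements; the point is that a handler issuing several sequential queries multiplies the server-to-database distance.

```python
# Illustrative latency arithmetic (numbers are invented for the example).
# A typical request handler issues several sequential DB queries, so the
# server<->database round trip gets multiplied, not the client<->server one.

def total_latency_ms(client_to_server: float, server_to_db: float, queries: int) -> float:
    """One client round trip plus `queries` sequential DB round trips."""
    return client_to_server + queries * server_to_db

# Edge function near the user, database far away:
edge = total_latency_ms(client_to_server=10, server_to_db=120, queries=3)
# Single-region server sitting right next to the database:
colocated = total_latency_ms(client_to_server=120, server_to_db=2, queries=3)

print(edge, colocated)  # 370 vs 126 – the "slow" static server wins
```

The farther server wins as soon as the handler makes more than one query, which is almost every real CRUD endpoint.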

The popular one

Now, let's shift our focus to the most familiar deployment options: AWS EC2, Google Compute Engine, Azure Virtual Machines.

But why not go with them? Can't we auto-scale? Make servers multi-region?

We can.

The problem? It's HARD, and it's not cheap. And it requires expertise. An EC2 setup that autoscales? With load balancers? Easy deployment via CI/CD? Good luck with that.

2. The infrastructure must be easy to deploy, scale and manage.

The container one

Now, let's enhance the previous solution by incorporating containers into the mix: serverless container platforms.

The three most popular options in this realm are:

  • AWS Fargate / App Runner
  • Azure Container Apps
  • Google Cloud Run

With these platforms, you deploy your application using Dockerfiles. Among the big three, I've personally had the most experience with AWS and GCP. While some of Google Cloud's services may be subpar or lacking, Google Cloud Run is amazing.

It offers default autoscaling, seamless deployment (often just requiring a pointer to your Dockerfile), and the ability to scale down to zero to minimize costs (unlike AWS, god knows why).


So what about the database?

AWS RDS, Azure SQL or Google Cloud SQL are indeed easy to set up, even with multi-region support.

The problem? They'll cost you a kidney, maybe even two. And we don't want that.

3. The database must not cost us a kidney.

So, let's explore new contenders in the market:

For MySQL, we've got PlanetScale, and for PostgreSQL, there's Neon.

These are fully managed database platforms that offer autoscaling, database branching, replicas, and more, all within a few clicks.


Please remember that this is a PERSONAL OPINION :)

So let's summarize our points:

1. Servers must be located as close as possible to the database.
2. The infrastructure must be easy to deploy, scale and manage.
3. The database must not cost us a kidney.

Now, who will be the final winner?

Amazon EC2 with RDS PostgreSQL?

Or perhaps Google Cloud Run with PlanetScale MySQL?




None of them (you saw that coming).

So, while it's no surprise that I lean towards the serverless container solution, AWS and GCP still require a lot of tweaking and configuration for seamless CI/CD deployments.

And as for PlanetScale and Neon, the pricing is indeed great... initially... when traffic is low. However, as the traffic increases, our wallets begin to empty.

So who then?

Despite my skepticism towards "new" technologies (we all know that they die like flies these days), here I am choosing them: Fly.io + Turso.

So what sets them apart from the rest?

Let's begin with Fly.io. I'm not exaggerating when I say that if you have a Dockerfile ready, it provides THE BEST experience among serverless container platforms.

With just one command, you can deploy your app and enjoy amazing logging, scaling, insights, and pricing.

Now, let's talk about Turso, which utilizes libSQL, a fork of SQLite.


SQLite. Yes, that SQLite. Yes, for production. Yes, I am tired of telling people that the notion "SQLite is not for production" is a stupid, outdated myth.

I will just drop this here and move on.
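If you want a quick sanity check yourself, here's a minimal sketch: with WAL journal mode enabled, SQLite happily serves many concurrent readers while a writer holds its own connection. The file name and row data are arbitrary for the demo.

```python
# Minimal demo: SQLite with WAL mode serving concurrent readers.
import os
import sqlite3
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "demo.db")

db = sqlite3.connect(path)
db.execute("PRAGMA journal_mode=WAL")  # readers don't block the writer
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO users (name) VALUES (?)", [("a",), ("b",), ("c",)])
db.commit()

results = []

def reader():
    # Each reader gets its own connection, as each request would in an app.
    conn = sqlite3.connect(path)
    results.append(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])
    conn.close()

threads = [threading.Thread(target=reader) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # eight readers, each sees all 3 rows
```

Eight threads reading simultaneously, zero locking drama. That's the access pattern of most CRM/SaaS apps.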


Turso - The fastest way I've ever created a database with replicas. They are incredibly affordable and lightning-fast.

And did I mention that they offer an option to have an embedded database inside your devices? For mobile? That's simply amazing.

Furthermore, both of these teams feel like genuine people, admitting to their problems and acknowledging their shortcomings. And this is truly one of the main reasons I am investing in them, even though they are new :)

Now all that's left is the client. To be perfectly honest, if your client-side application can be deployed on the edge, go for it.

In our case, we use gRPC, so we cannot.

Also, it's important to remember that there might be a very slight performance downgrade, mostly because even though the edge deploys closest to you, the traffic may still not take the shortest route:

[Image: client on edge vs client closest to server]


I'll keep it brief. All the concepts have already been implemented in one of my open-source projects:


The name stands for Svelte + Go + SQLite + gRPC. If you're wondering why these technologies, please refer to the project README :).

Let's start:

  1. Create an account on Turso.
  2. Create the primary database.
  3. Create 2 replicas.
  4. Generate an authorization token.
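For reference, the steps above map roughly to these Turso CLI commands. The database name and regions are placeholders, and the CLI evolves, so verify the exact syntax against turso help.

```shell
# Placeholders: my-app, hkg, dfw. Check `turso help` for current syntax.
turso auth signup                 # 1. create an account
turso db create my-app            # 2. create the primary database
turso db replicate my-app hkg     # 3. create replicas, one per region
turso db replicate my-app dfw
turso db tokens create my-app     # 4. generate an authorization token
turso db show my-app --url        # connection URL for your backend
```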

[Image: Turso databases]

This gives us a nice spread for the whole world.

[Image: Turso spread]

  1. Create an account on Fly.io.
  2. Launch your apps with fly launch --no-deploy.
  3. Tweak the environment variables and set your secrets with fly secrets set.
  4. Deploy with fly deploy.
  5. Scale out to the replica regions with fly scale count 4 --region dfw,hkg.

[Image: Fly machines]

You may have noticed something here. The region names are the same for Fly.io and Turso.

And that's because it's yet another benefit of using these two together: Turso runs on Fly.io's infrastructure, which means you can place your database as close as possible to the server.

And with that, we're all set :). Scalable, multi-region, affordable, and easy to maintain and deploy.

To back this up, this stack is currently employed for the majority of our clients at Solace. And it works amazingly.

Things to remember

Here, I'd like to highlight two important points we need to keep in mind.

First, Turso replicas, like most other database replicas, are READ replicas. This means you get incredible speed when reading rows, but all writes still go to the primary location.

This can be a problem for heavy-write applications, which 99% of apps are NOT.
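Turso hides this routing behind a single URL, but the principle read replicas imply can be sketched in a few lines. The URLs and the helper function here are hypothetical, purely for illustration:

```python
# Toy illustration of what read replicas imply: reads can be served from
# the nearest replica, writes must travel to the primary. In practice
# Turso does this routing for you; both URLs below are hypothetical.

PRIMARY = "libsql://my-app-primary.example"      # hypothetical URL
NEAREST_REPLICA = "libsql://my-app-hkg.example"  # hypothetical URL

def pick_endpoint(sql: str) -> str:
    """Route plain SELECTs to the nearest replica, everything else to primary."""
    verb = sql.lstrip().split(None, 1)[0].upper()
    return NEAREST_REPLICA if verb == "SELECT" else PRIMARY

print(pick_endpoint("SELECT * FROM users"))           # -> nearest replica
print(pick_endpoint("INSERT INTO users VALUES (1)"))  # -> primary
```

Reads stay fast everywhere; only writes pay the trip to the primary region, which is exactly why this setup suits read-heavy apps.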

Secondly, Fly.io offers an option to scale down automatically. However, you can set min_machines_running = 1 to reduce cold starts. The challenge is that this setting is global, meaning it will shut down all your replicas and leave only the primary location running.
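For reference, the relevant fly.toml fragment looks roughly like this (key names per Fly.io's config format; the values are examples):

```toml
# fly.toml (fragment) – note: min_machines_running applies globally,
# not per region, so only the primary region stays warm.
[http_service]
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 1
```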

It would be awesome if they could make it PER REGION. Hello Fly.io, please make it happen :).


Hope you enjoyed it!

Like always, a little bit of self-promotion :) Follow me on GitHub and Twitter to get notified about new posts. I'm working on promoting lesser-known technologies, with Svelte, Go, and Rust being the main focuses.

Top comments (2)

Andy R

That sounds pretty niche. What sort of database traffic are you expecting, or is the autoscaling required more for resilience?

Mateusz Piórowski

To be honest, we are mostly not afraid of traffic; it can hold thousands of connections without a problem. The biggest benefit for us is that we've got users from around the globe, and when they are calling from LA to London, the latency is very noticeable.

This setup negates it all. So, multi-region. Plus some additional machines just for safety in case one stops.

But still, even if you don't need it, this setup provides so many more benefits. It's cheap as hell. And easy to maintain. The logging is amazing, the CI/CD is just a few lines. And if in the future you WILL need scale, you've got it :)