Justifying costs is hard when running a startup or side project. Deploying a web application serverlessly is no longer difficult: Amazon’s AWS Lambda and Google Cloud’s Cloud Run both give you good control over your web app and how it scales. But what about the database? How do we get database storage that is pay-per-use instead of a fixed cost?
We could use services like AWS’s DynamoDB or Google Firebase’s Firestore. They’re great little NoSQL solutions, but they won’t work with Laravel’s Eloquent out of the box: both require their own client implementations to store data and don’t use SQL as a query language. In fact, understanding them fully requires learning a proprietary technology. These solutions are also primarily cloud-based, which for some is a problem because it means your data and application are locked into a particular cloud provider. We could work around all of this, but your side project wants to be up and running quickly, not reinventing core components or learning new technologies just to save costs.
I will note that AWS has actually had a serverless database option since 2017 or so, but it still carries a fixed cost, one higher than running a small regular RDS database all month long, which sucks.
Instead, a lesser-known option is CockroachDB.
I suppose firstly I should backtrack a little in case you don’t know what I mean by serverless. The simplest definition of something serverless is a service that does not have a dedicated machine, virtual or otherwise; instead the service only executes when it receives a request. In the case of web applications, an HTTP request boots up an instance of the application, which serves the request and then shuts down, costing you only for the resources used, typically billed per second. A database is a little different in that it handles queries on demand instead. Either way, we’re paying for what we use and how often we use it, rather than the alternative, which for a database typically means running an instance 24/7 whether it’s performing any functions or not.
This kind of flexibility makes great sense. Most businesses don’t need around-the-clock services and operate with customers in set timezones who want to do the bulk of their business between 9am and 9pm. It’s always been the aim of scalable infrastructure to take advantage of this and scale down during the off-peak hours.
Now that you understand the what and the why, let’s look at the how by understanding what CockroachDB is.
CockroachDB is an open source relational database, implemented in Go and compatible with PostgreSQL clients. That means it works with Eloquent: our Laravel app needs no changes and no additional packages to use CockroachDB.
We can also use it locally in our projects, unlike Firestore or DynamoDB, which are harder to run and test with accurately on your own machine. There are also Docker images available, meaning we can consider hosting our own instance in the future, should we need to, and use it locally with something like Laravel Sail.
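As a rough illustration of local use, a single-node CockroachDB can be run via Docker Compose. This is only a sketch for local development, and the image tag and flags are worth checking against the CockroachDB docs:

```yaml
# docker-compose.yml (sketch): a single-node, insecure CockroachDB for local dev only.
services:
  cockroach:
    image: cockroachdb/cockroach:latest
    command: start-single-node --insecure
    ports:
      - "26257:26257"  # SQL (PostgreSQL wire protocol)
      - "8080:8080"    # built-in admin UI
    volumes:
      - cockroach-data:/cockroach/cockroach-data
volumes:
  cockroach-data:
```

With that running, the app can point its `pgsql` connection at `localhost:26257` just as it would at the hosted cluster.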
Essentially you only need to sign up for an account on the CockroachDB website. No credit card information is currently required to use their free tier, and even when you do need to go beyond it you can set a billing limit for what you can afford. Costs break down into storage and request units, which are essentially the computing power used by your database queries; more complex queries use more request units.
Once you’ve got an account you need to create a cluster. At this point you’ll select which cloud provider and data centre you wish to have your database in. This is great if you also intend to use AWS or GCP to run the actual Laravel application. Personally I’m a fan of GCP, so I’ll be using their European region in Germany.
After this we only need to provide a name for the cluster, if we wish to.
Once the cluster has been created we only need to grab the database details. If we head over to the Connection Parameters tab we’ll find everything needed to configure our application.
All the connection details on the website fit nicely into our .env.
DB_CONNECTION=pgsql
DB_HOST=<provided host name>
DB_PORT=26257
DB_DATABASE=<provided database name>
DB_USERNAME=<your sql username>
DB_PASSWORD=<your password>
Now we’ve done that, we can see if everything is working with a quick run of the migration command.
php artisan migrate:fresh --seed
Assuming that command succeeds without throwing an error, that’s it: you’ve now got yourself a serverless, zero-cost database that’s ready to use.
Currently this service is in beta, meaning it’s not a production-ready service you should depend upon. That said, if it’s a small project, I think a free cloud-based database that’s this easy to use is worth it, with that caution in mind.
Where it might be a pain for many, though, is that there’s no snapshot feature. One of the big pluses of AWS’s RDS as a fully managed database is that it takes incremental snapshots of your database, leaving you feeling secure about your data in the event of some kind of critical loss. That said, there are ways of backing up manually, as stated in their FAQ.
Another issue is infrastructure as code. As it stands there’s only a very simple dashboard to manage the database, and as far as I’m aware there’s no ability to create databases through APIs. While most hobby projects won’t mind this, it could become a problem down the line as you want to create new environments or script how your infrastructure is provisioned.
After a bit more testing with Laravel and trying to migrate an old project, I found that you may encounter the odd issue using CockroachDB with the PostgreSQL connection type. While CockroachDB is compatible for the most part, three issues came up:
If you expect primary keys to be generated in sequence (e.g. IDs incrementing by 1), this won’t work, as CockroachDB creates unique IDs without a sequence by default.
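As a sketch of what this means in practice (the `Post` model here is a hypothetical example), avoid using the ID to infer insertion order and sort by a timestamp column instead:

```php
// With CockroachDB's default unique IDs, newly inserted rows do not receive
// small, consecutive IDs, so the ID no longer reflects insertion order.

// Risky on CockroachDB: assumes the highest ID is the newest row.
$latest = Post::orderByDesc('id')->first();

// Safer: order explicitly by a timestamp column.
$latest = Post::orderByDesc('created_at')->first();
```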
Changing columns on a table may not work in a migration because the migrations by default occur inside a transaction which is not supported by CockroachDB.
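Laravel lets you opt a single migration out of running inside a transaction by setting the `$withinTransaction` property on the migration class. A sketch, with a hypothetical `users` column change:

```php
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    // Run this migration outside a transaction so CockroachDB
    // can apply the column change.
    public $withinTransaction = false;

    public function up(): void
    {
        Schema::table('users', function (Blueprint $table) {
            $table->string('name', 100)->change();
        });
    }
};
```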
If you are using joins with Delete statements, these will fail. Join support seems fairly different in CockroachDB to Postgresql but you can effectively get around this by using subqueries instead.
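To illustrate the workaround (the `comments` and `posts` tables are made up for the example), the join can be rewritten as a `whereIn` subquery with the query builder:

```php
use Illuminate\Support\Facades\DB;

// Fails on CockroachDB: a DELETE that joins another table.
// DB::table('comments')
//     ->join('posts', 'posts.id', '=', 'comments.post_id')
//     ->where('posts.archived', true)
//     ->delete();

// Works: express the join condition as a subquery instead.
DB::table('comments')
    ->whereIn('post_id', function ($query) {
        $query->select('id')->from('posts')->where('archived', true);
    })
    ->delete();
```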
To smooth over these incompatibilities I’ve recently created an alpha driver for Laravel and CockroachDB. The tests created for the package mirror those in the base Laravel framework and are run against an actual instance of CockroachDB, meaning using CockroachDB should be 100% compatible with Laravel.
Proceed with caution: ultimately this is your database, and potentially your users’ data, we’re talking about. No one wants to depend on something that won’t last or, worse, ties them into a technology that won’t meet future needs. That said, don’t feel like you shouldn’t try something new. CockroachDB looks very promising in my eyes, and being directly compatible with a common relational database removes a lot of that risk. I’ve found many PHP/Laravel developers favour MySQL as their go-to, mostly because it’s what they know, but in my experience PostgreSQL is actually a better system, so moving to something like CockroachDB seems a bit of a no-brainer.
Have you tried CockroachDB’s serverless offering? How are things looking? Feel free to let me know.
I’m Peter Fox, a software developer in the UK who works with Laravel among other things. Thank you for reading my article; I’ve got several more on both Medium and Dev.to. If you want to know more about me, head over to https://www.peterfox.me. I’m also now sponsorable on GitHub. If you’d like to encourage me to write more articles like this, please do consider dropping a small one-off donation.