This blog covers how I unlocked performance that allowed me to scale my backend from 50K requests to 1M requests (~16K reqs/min) on minimal resources...
Amazing blog! Can you also share the code of the backend you made?
Also, another optimization I would have added is using sqlc with pgx rather than GORM, since sqlc gives the performance of raw query execution with proper idiomatic Go models.
Thanks, Achintya!
My next set of optimizations is pushing the app beyond ~350 RPS, for which I might need to dump GORM and opt for a faster, lighter alternative like pgx.
Sorry, I cannot share the code for the backend as it is proprietary to my work.
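For context on the sqlc + pgx suggestion above: sqlc compiles annotated SQL files into typed Go code, so you keep raw-query performance without hand-writing scanning code. A minimal sketch of a sqlc query file (the table and column names here are illustrative, not from the post):

```sql
-- name: GetEntityByID :one
-- sqlc reads this annotation and generates a typed Go method
-- GetEntityByID(ctx, id) backed by pgx.
SELECT id, name, created_at
FROM entities
WHERE id = $1;
```

Running `sqlc generate` against a file like this emits idiomatic Go structs and query methods that execute through pgx, which is where the "raw query performance with idiomatic models" claim comes from.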
Thanks for this blog post! A lot of good information here and I'm mostly curious around the database and data set.
How large was the initial data set you were working with, or was it an empty database? At first I was thinking most operations were on a single table (incredibly simple CRUD), but when you mentioned the joins, my curiosity was piqued about the DB schema.
How many tables are joined on the queries you were able to bring down to ~50ms initially?
Are those included in the ones that went back to 200-400ms?
I'm also curious on the database settings and query structure.
Do you have the returned fields added to the index (in the correct order) to better utilize the machine's memory, or would that make the index too large?
Thanks again!
The CRUD operation I mentioned touches multiple tables for authentication, integrity verification, the actual operation, and post-processing tasks. Most of the tables had < 10K records, but the entity on which operations were performed had ~500K records to start with, and by the end it had > 1M.
Just two, but on fairly heavy tables (> 500K and 100K records).
Yep, my suspicion for the queries taking 200-400ms is the availability of open connections. Since we have capped open connections at 300, the SQL driver might be waiting for a connection to free up. Going to sit on this and investigate the real reason; it might just be slow query execution.
We are using AWS Aurora, AWS's managed Postgres service, running on a base instance: db.t4g.medium (2 CPUs, 4GB RAM, 64-bit Graviton). Sorry, won't be able to share the query structure as this is proprietary work.
Yep, I do. We suffer from slightly slow writes, but it is worth it as we are a read-heavy app.
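A covering index of the kind discussed here, where the returned fields ride along with the keyed columns, can be declared in Postgres (11+) with `INCLUDE`; the table and column names below are made up for illustration:

```sql
-- Key columns drive the lookup order; INCLUDE columns are stored in the
-- index leaf pages so the query can be answered without touching the heap.
CREATE INDEX idx_entities_lookup
    ON entities (tenant_id, created_at)
    INCLUDE (name, status);
```

The trade-off mentioned in the reply is real: every included column makes the index larger and each write slightly slower, which is why this tends to pay off mainly for read-heavy workloads.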
Great questions, and thanks for reading this, Steve!
Excellent article Riken! As a backend developer who just deployed a monolithic Node/Express backend on AWS, I read every word. Our backend is so inefficient right now that the container wouldn't even load on a t2.micro; we had to use a t3.medium (and we have 0 users right now, so the RDS databases were empty). I was trying to figure out where to even begin with optimization, and you have provided so many things to try.
One question though on the nature of your app: you said it's read heavy. Is it a web app or mobile? We're building a social network, and I'm wondering if Go would be a better option than Node. Appreciate any thoughts.
Our backend powers our mobile app and web app.
Node.js is definitely not a bad idea; I've worked with it before, but Golang provides so much out of the box. The memory management is just next level; Node.js wouldn't be able to reach that without a lot of manual configuration.
I'd suggest writing your core routes in Golang, hooking it up with RDS, and then testing.
Glad you liked the article :)
Couldn't have worded it better!
Thanks Nadeem!
Thanks Nadeem :D
Thanks for sharing this Riken Shah!
Really Helpful
Thanks Kiran, Glad it was helpful :D
Great post Riken - especially for your first one! 🚀
Very much looking forward to reading about your observability/monitoring setup.
Amazing
Great writeup! 🔥 🔥 🔥
Gem 💎
Thanks for the article
This was really helpful. Will try to implement some of it in our Go service as well
Thanks Harsh, Glad you enjoyed it :D
Total Banger 🔥🔥🔥🔥🔥
Thanks for sharing
Thanks Peta, Glad you liked it :D
github.com/eranyanay/1m-go-websockets ;)
Yep, I've read this article, it's one of the best. I got to know about file descriptors after reading this :D
Great!
Loved every bit of this. Well detailed and informative. Keep dropping them 🙌🏼
Thanks Samuel, going to write more :D
Really good article and value shared.
Thanks Antonio :)
good article
Thanks, Idorenyin ❤️
Nicely done! Keep up this excellent content!
thanks marcos, happy to see you liked it :D
Excellent content, very informative. I'm gonna learn Grafana after this.
Definitely, Grafana is too good.
Thanks, Leo for the kind words :)
can you share this git project so we can also contribute?
Sorry this is proprietary work, can't share!
Good post, thanks!
Thanks for giving it a read :D
Interesting read Riken! Loved how you explained the whole process.
Looking forward to reading the observability article.
You mentioned you have strong transaction handling in your middleware. How is this implemented? Is it in Go? Great article, btw.
Thanks for giving it a read Kalyan :D
Great, Thanks for sharing
I completely forgot about the ulimits trick... My app is written in Fiber and uses pgxpool through sqlc, and on my laptop throughput is bottlenecked by my reverse proxy at about 3,000 RPS, but the server itself was doing over 15K RPS the last time I checked, with almost no targeted optimization whatsoever.
There was also some ridiculously low latency for most requests.
I haven't measured in a couple of months, so I'll do it again and let you know.
As much as this much perf is useless, it's a lot of fun! And I won't have to worry about scaling, ever.
What part of the equation does storage play on all this fun?
Great blog! I understand the code is proprietary to your work, but could you provide a sample repository that you made? It would be helpful for reference and learning. ❤️
Amazing 👏
Is that the bag theme in the code given...? 🤔
Amazing article, well done
But the graph below it says it's 12-18K per second, not per minute. Same in the "million hits" section below it.
It's a time series graph; it starts slow and ramps up to 12-18K reqs.
It's a great blog that I read recently, and I'm waiting for the post on the observability built into your backend.
Gnarly work here.