This blog covers how I unlocked performance that allowed me to scale my backend from 50K requests → 1M requests (~16K reqs/min) on minimal resources.
Amazing blog! Can you also share the code of the backend you made?
Also, another optimization I would have added is to use sqlc with pgx rather than gorm, as sqlc gives the performance of raw query execution with proper, idiomatic Go models.
Thanks, Achintya!
My next set of optimizations is pushing the app beyond ~350 RPS, for which I might need to dump gorm and opt for a faster, lighter alternative like pgx.
Sorry, I cannot share the code for the backend as it is proprietary to my work.
Thanks for this blog post! A lot of good information here and I'm mostly curious around the database and data set.
How large was the initial data set you were working with, or was it an empty database? At first I was thinking most operations were on a single table (incredibly simple CRUD), but when you mentioned the joins, my curiosity was piqued about the DB schema.
How many tables are joined on the queries you were able to bring down to ~50ms initially?
Are those included in the ones that went back to 200-400ms?
I'm also curious on the database settings and query structure.
Do you have the fields being returned added to the index (in the correct order) to better utilize the machine's memory, or would that make the index too large?
Thanks again!
The CRUD operation I mentioned touches multiple tables for authentication, verification of integrity, the actual operation, and post-processing tasks. Most of the tables had < 10K records, but the entity on which operations were performed had ~500K records to start with, and by the end it had > 1M records.
Just two, but on fairly heavy tables (> 500K and 100K records).
Yep, my suspicion for the queries taking 200-400ms is the availability of open connections. As we have capped open connections at 300, the SQL driver might wait for a connection to free up. Going to sit on this to investigate the real reason; it might just be slow query execution.
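For context, a cap like the 300 mentioned above maps onto Go's standard `database/sql` pool knobs. A minimal sketch, assuming the pgx stdlib driver; only the 300 comes from the thread, the other values are illustrative:

```go
// Sketch of capping the database/sql connection pool (illustrative
// values except the 300 cap). When all 300 connections are busy, a
// query blocks waiting for one to free up, which shows up as extra
// "query" latency even if the query itself is fast.
db, err := sql.Open("pgx", dsn)
if err != nil {
	log.Fatal(err)
}
db.SetMaxOpenConns(300)                // hard cap on open connections
db.SetMaxIdleConns(50)                 // illustrative idle-pool size
db.SetConnMaxLifetime(5 * time.Minute) // recycle long-lived connections
```

One way to distinguish the two causes is to watch `db.Stats().WaitCount` and `WaitDuration`: if those climb, requests are queuing for connections rather than executing slowly.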
We are using AWS's managed Postgres service, Aurora, running on the base instance db.t4g.medium (2 vCPUs, 4GB RAM, 64-bit Graviton).

Sorry, won't be able to share the query structure as this is proprietary work.
Yep, I do. We suffer from slightly slow writes, but it is worth it as we are a read-heavy app.
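For readers wondering what that trade-off looks like in Postgres, here is a hypothetical covering index (table and column names invented for illustration, not from the post):

```sql
-- Hypothetical example: the INCLUDE columns are stored in the index
-- leaf pages, so a query like
--   SELECT status, total FROM orders WHERE user_id = $1
-- can be answered from the index alone (an index-only scan) without
-- touching the table heap. Every write must now also update this
-- larger index, hence the slightly slower writes mentioned above.
CREATE INDEX idx_orders_user_created
    ON orders (user_id, created_at DESC)
    INCLUDE (status, total);
```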
Great questions, and thanks for reading, Steve!
Couldn't have worded it better!!
Thanks Nadeem :D
Thanks Nadeem!
Total Banger 🔥🔥🔥🔥🔥
Thanks for sharing
Thanks Peta, Glad you liked it :D
Thanks for sharing this Riken Shah!
Really Helpful
Thanks Kiran, Glad it was helpful :D
github.com/eranyanay/1m-go-websockets ;)
Yep, I've read this article, it's one of the best. I got to know about file descriptors after reading this :D
Nicely done! Keep going with this excellent content!
Thanks Marcos, happy to see you liked it :D
Great!
Loved every bit of this. Well detailed and informative. Keep dropping them 🙌🏼
Thanks Samuel, going to write more :D
Thanks for the article
This was really helpful. Will try to implement some of it in our Go service as well
Thanks Harsh, Glad you enjoyed it :D
Really good article and value shared.
Thanks Antonio :)
Excellent content, very informative. I'm gonna learn Grafana after this.
Definitely, Grafana is too good.
Thanks, Leo for the kind words :)
good article
Thanks, Idorenyin ❤️
You mentioned you have strong transaction handling in your middleware. How is this implemented? Is this in Go? Great article btw
Thanks for giving it a read Kalyan :D
Great, Thanks for sharing
Can you share this Git project so we can also contribute?
Sorry, this is proprietary work, can't share!
Good post, thanks!
Thanks for giving it a read :D
Excellent article Riken! As a backend developer who just deployed a monolithic Node/Express backend on AWS, I read every word. Our backend is so inefficient right now that the container wouldn't even load on a t2.micro; we had to use a t3.medium (and we have 0 users right now, so the RDS databases were empty). I was trying to figure out where to even begin with optimization, and you have provided so many things to try.
One question though on the nature of your app: you said it's read-heavy. Is it a web app or mobile? We're building a social network, and I'm wondering if Go would be the better option instead of Node. Appreciate any thoughts.
Great post Riken - especially for your first one! 🚀
Very much looking forward to reading about your observability/monitoring setup.
Great blog! I understand the code is proprietary to your work, but could you provide a sample repository that you made? It would be helpful for reference and learning. ❤️
What part of the equation does storage play on all this fun?
Amazing
Great writeup! 🔥 🔥 🔥
But the graph below it says it's 12-18K per second, not per minute. Same in the "million hits" section below it.
It's a time-series graph; it starts slow and ramps up to 12-18K reqs.