The cost of parsing HTTP

ihucos ・1 min read

When creating Simple Web Analytics I tried to design for scale from the beginning. Coming from the Python world, one thing I hear a lot is "the database is the bottleneck". But for this specific use case that turned out not to be true. Apparently my application code barely affected how many requests per second the server could handle. And in the end I had to switch to Go to get what I consider acceptable results.

Let me explain and compare numbers. With a hello-world Flask application and gunicorn I can get 3343.95 trans/sec. Trying an asynchronous WSGI server (gevent.pywsgi) did not help; there the numbers are around 1003.65 trans/sec. Now let's throw Go into the comparison. Before boring you too much, I will make a completely unfair comparison and jump straight to this number: 8765.15 trans/sec. That includes the whole application code. And now, just for completeness, here is my imperfect measurement of what a simple hello world in Go can do: 12465.77 trans/sec.

In conclusion, for my specific use case - doing optimized queries on a Redis database - the bottleneck is clearly handling the HTTP requests, not the application code. I wonder what exactly takes so long compared with the application code. Is it parsing HTTP, is it handling the sockets, is it something else? I'd like to know!
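One way to probe the "is it parsing HTTP?" part of that question is to isolate the parser: feed raw request bytes straight to Go's http.ReadRequest in a loop, with no sockets involved. A rough sketch, where the request bytes and iteration count are made up for illustration:

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
	"time"
)

// rawRequest is a minimal HTTP/1.1 request as it would arrive on the wire.
const rawRequest = "GET /track HTTP/1.1\r\n" +
	"Host: example.com\r\n" +
	"User-Agent: siege\r\n" +
	"Accept: */*\r\n\r\n"

// parseOnce parses one raw request into an *http.Request.
func parseOnce() (*http.Request, error) {
	return http.ReadRequest(bufio.NewReader(strings.NewReader(rawRequest)))
}

func main() {
	const n = 100000
	start := time.Now()
	for i := 0; i < n; i++ {
		if _, err := parseOnce(); err != nil {
			panic(err)
		}
	}
	elapsed := time.Since(start)
	fmt.Printf("parsed %d requests in %v (%v per request)\n",
		n, elapsed, elapsed/time.Duration(n))
}
```

Comparing the per-request parse time this prints against the end-to-end latency siege reports would hint at how much of the budget goes to parsing versus sockets and scheduling.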




12k rps is not great for a Go hello-world app. Could you share a link to the app's source code on GitHub? Do you log each request?



This is the code: gist.github.com/ihucos/82199ac4adc...
To measure, I run siege http://localhost:8080, wait about 10 to 15 seconds, and hit Ctrl-C to see the results.

I am using a MacBook Pro Late 2013 with an Intel Core i5-4258U CPU @ 2.40GHz. The operating system is Ubuntu 18.04.

I'd be glad to know how I can get more out of it!