Have you ever load tested your APIs on the cloud? In this post, we explore how to load test and benchmark the performance of different RESTful frameworks.
Recap
In our previous post, we benchmarked frameworks from different languages. Our test hardware/server was my Raspberry Pi 3 Model B from 2016. It was a good experiment, but this time around we needed something more realistic.
TL;DR - We used Kubernetes on the cloud (Google Cloud Platform) -> jump to the benchmarks section.
Let's get back to our story, if you are following the storyline from the first post.
Dave went back to his peers to show the benchmark results from the initial tests he ran on his Raspberry Pi 3. While some of his peers liked the idea and appreciated the outcome, others pointed out that they would need the tests run on real production-grade hardware to believe the results! Dave went back to his garage to tweak his test bench.
Intro
Our setup is quite straightforward: each RESTful service gets the same amount of CPU and memory (enforced via K8s config). The entire setup takes a few minutes to initialize, thanks to tools like the gcloud CLI, Terraform & Pulumi, so you can get the environment up and running without much hassle. If you want to run the benchmark without fancy infra (i.e. without a private VPC etc.), we recommend the CLI wrapper, since it is built over the gcloud SDK. For the adventurous type, we have a slightly more elaborate setup with Terraform (GCP) & Pulumi (GCP & Digital Ocean).
Environment Review
Kubernetes is a planet-scale tool that can orchestrate containerized applications and more.
Check out our series on Kubernetes, it is swell!
Since we didn't want the application to scale as the load increases, we have put some limits in place. The config ensures that the deployments stay put and do not auto-scale in the K8s cluster. The whole point of this exercise is to simulate a prod environment (but without auto-scaling), then load test it and measure performance.
Quite the step up from the test on Raspberry Pi 3, isn't it?
It took us a while to figure out the right configuration for the cluster so that you could replicate the tests on your own with the optimal amount of resources. The K8s environment can be set up on the GCP free tier (at the time of writing this article).
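If you just want a quick sandbox rather than our CLI wrapper or Terraform/Pulumi setups, a raw gcloud sketch along these lines should get a small cluster going. The cluster name, zone and machine type below are illustrative placeholders we picked to stay within the free-tier 8 vCPU quota; adjust them to your project.

# Assumption: gcloud CLI installed and a GCP project already selected.
# Cluster name, zone and machine type are placeholders, not the project's defaults.
gcloud container clusters create benchmark-cluster \
  --zone us-central1-a \
  --machine-type e2-standard-2 \
  --num-nodes 3   # 3 x 2 vCPUs = 6 vCPUs, under the 8 vCPU free-tier quota

# Fetch credentials so kubectl talks to the new cluster
gcloud container clusters get-credentials benchmark-cluster --zone us-central1-a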
The source code link for this entire project is given in the references section!
Let's review our K8s config file
The Deployment config looks like this -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rest-net-http-golang
spec:
  selector:
    matchLabels:
      app: rest-net-http-golang
  template:
    metadata:
      labels:
        app: rest-net-http-golang
    spec:
      containers:
        - name: rest-net-http-golang
          image: ghcr.io/gochronicles/benchmark-rest-frameworks/rest-net-http-golang:latest
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: "1Gi"
              cpu: "500m"
          ports:
            - containerPort: 3000
Notice that we have allocated 1Gi of memory and 500m of CPU (half a vCPU). The same constraint is applied to all frameworks, which ensures that the amount of compute given to each deployment is consistent.
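Once a deployment is applied, one quick way to double-check that the limits actually landed is to query them with kubectl; the jsonpath expression below is just one way to pull the field out, using the deployment name from the config above.

# Print the resource limits of the first container in the deployment
kubectl get deployment rest-net-http-golang \
  -o jsonpath='{.spec.template.spec.containers[0].resources}'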
apiVersion: v1
kind: Service
metadata:
  name: rest-net-http-golang
spec:
  type: LoadBalancer # provide public ip to the service
  selector:
    app: rest-net-http-golang
  ports:
    - port: 80
      targetPort: 3000
The Service config exposes the RESTful apps in the cluster network via a public IP so that our benchmarking client can connect and run load simulations.
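Roughly, rolling this out and finding the public endpoint looks like the sketch below; the manifest file names are placeholders for wherever you keep the configs.

# Apply the Deployment and Service manifests (file names are illustrative)
kubectl apply -f rest-net-http-golang-deployment.yaml
kubectl apply -f rest-net-http-golang-service.yaml

# Wait for the LoadBalancer to get a public IP, then note the EXTERNAL-IP column
kubectl get service rest-net-http-golang --watch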
Our attack tool for load testing
This time around, we decided to play around with different benchmarking & load-testing tools. Finally, we chose Hey.
Hey is a drop-in replacement for the ab tool (Apache Benchmark).
hey -c 800 -n 35000 http://ip-addr-url/
This command sends a total of 35k requests with 800 concurrent workers to the RESTful API services on the K8s cluster.
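To hit every framework with the same load, you can loop over the public IPs of the services with identical hey parameters; the IPs below are placeholders you would fill in from kubectl get services. A roughly equivalent ab invocation is included as a comment for reference.

# Placeholder IPs - replace with the EXTERNAL-IPs reported by `kubectl get services`
for ip in 203.0.113.10 203.0.113.11 203.0.113.12; do
  echo "Benchmarking http://$ip/"
  hey -c 800 -n 35000 "http://$ip/"
done

# Roughly equivalent Apache Benchmark command for a single service:
# ab -c 800 -n 35000 http://203.0.113.10/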
Honorable mentions -
- Locust! - This was nearly the ideal tool for this test for a couple of important reasons: we could deploy this Python-based, web-app-like load-testing tool on the K8s cluster and run benchmarks from within the cluster network (no need for a public IP), and it comes with a nice UI dashboard to visualize the results. However, the test results came out the same across frameworks; it looked like we couldn't schedule enough workers to really push the throttle on the RESTful APIs, since there was a limit on the number of processors we could deploy on our GCP instance (the free tier has an 8 CPU limit for the entire project). If you want to tinker with Locust, here's the K8s config we created.
- Apache Benchmark - A good old tool we could still have used, but the results were better and faster with hey, and it shares almost the same CLI options. A CPU monitoring tool (htop) revealed that the ab tool didn't take advantage of all the CPU cores, whereas hey fired up all CPU cores with the same parameters out of the box.
Benchmarks
The order of slowest to fastest framework in the benchmark results is as expected. Go frameworks are at a minimum 10x faster than the Node & Python-based frameworks. However, the interesting bit is that FastAPI (a Python framework) isn't too far off from NestJS (which is only about ~12% faster).
FastAPI (Python)
NestJS (Node)
ExpressJS (Node)
Gin (Golang)
Net-http (Go std library)
Fiber (Golang)
Closing thoughts
The results are as we anticipated - Go-based frameworks are at least 10x faster than the Node & Python-based frameworks. One thing surprised us and is a possible area for more research -
In our local testing, Gin has always performed faster than Net/HTTP (Golang). However, in this test, it scored lower. The source code for this service and the Kubernetes config can be found here and here respectively.
Let us know in the comments if you found a better way to do these tests.
Your feedback and support mean a lot, so do share some love by sharing our posts on social media and subscribing to our newsletter! Until next time!
References
This article was originally published on GoChronicles.com by the same author and has been reposted with permission.