
Performance analysis of gRPC and Protobuf

SajidSamsad ・ 2 min read

So... I recently came across gRPC and Protobuf and got curious about them. I googled around, read blogs and articles, and watched talks on YouTube about what people think of gRPC and Protobuf and their impact in the microservice world. I found these two technologies fascinating.

We usually use the REST pattern for microservices to talk to each other, which runs over HTTP/1.1. gRPC, on the other hand, uses HTTP/2. That brings some nice features like header compression, and the payload itself is a compact binary Protobuf encoding. With gRPC, a remote call can be placed to a separate microservice as if it were a local function call. With the help of Protobuf and gRPC, it's easier for different microservices to talk to each other, and along the way the payload size decreases because of the compact encoding. As a result, it takes less of the servers' bandwidth, which is without any doubt a great improvement.
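To give a rough idea of why the payload shrinks, here is a hypothetical order message in Protobuf (the field names and numbers are my own illustration, not taken from the POC). On the wire, each field is encoded as a small numeric tag plus its value, so the field names never appear in the encoded bytes, whereas JSON repeats every key as a string in every payload:

```proto
syntax = "proto3";

package order;

// Hypothetical order payload (illustrative only).
// The encoded bytes carry only the field numbers (1-4) and values,
// not the field names below -- unlike JSON, which repeats every key.
message Order {
  int64  id       = 1;
  string item     = 2;
  int32  quantity = 3;
  double price    = 4;
}
```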

REST is basically a request-response model. gRPC, on the contrary, takes full advantage of HTTP/2 and offers streaming features: client streaming, server streaming, and bi-directional streaming (client and server streaming at once).
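The four RPC shapes gRPC supports can be seen directly in a service definition. This is a sketch with placeholder message and method names of my own (the POC's actual proto files may look different):

```proto
syntax = "proto3";

package payment;

// The four RPC shapes gRPC supports over HTTP/2.
service PaymentService {
  // Unary: classic request/response, like REST.
  rpc Charge(ChargeRequest) returns (ChargeResponse);

  // Server streaming: one request, a stream of responses.
  rpc WatchStatus(ChargeRequest) returns (stream ChargeResponse);

  // Client streaming: a stream of requests, one response.
  rpc BatchCharge(stream ChargeRequest) returns (ChargeResponse);

  // Bi-directional streaming: both sides stream independently.
  rpc Reconcile(stream ChargeRequest) returns (stream ChargeResponse);
}

message ChargeRequest  { int64 order_id = 1; double amount = 2; }
message ChargeResponse { bool ok = 1; string txn_id = 2; }
```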

gRPC has many more cool features, and to test whether it really performs better than JSON-based REST, I did a POC. In the GitHub repo (linked below), you can find the stats of the POC. I observed firsthand that gRPC with Protobuf performs better than JSON-based REST.

To conduct the POC, I built two services: an order service and a payment service. In v1, the order service and the payment service talk to each other with gRPC and Protobuf. I load-tested them to figure out the average requests/sec and the average read size. The result was great: 1000K requests in 329.12 s, 318 MB read.

Then I created v2 of the system, this time making the services talk to each other with JSON-based REST, and load-tested them. This time: 976796 2xx responses, 7465 non-2xx responses, 984k requests in 2449.86 s, 312 MB read, and 16k timeouts.
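Working out the throughput from those two runs makes the gap concrete (these numbers are just the totals above divided by the wall-clock times):

```python
# Throughput from the two load-test runs above.
# v1 (gRPC + Protobuf): 1,000,000 requests in 329.12 s
# v2 (JSON-based REST):   984,000 requests in 2,449.86 s

grpc_reqs, grpc_secs = 1_000_000, 329.12
rest_reqs, rest_secs = 984_000, 2449.86

grpc_rps = grpc_reqs / grpc_secs  # ~3038 req/s
rest_rps = rest_reqs / rest_secs  # ~402 req/s

print(f"gRPC: {grpc_rps:.0f} req/s")
print(f"REST: {rest_rps:.0f} req/s")
print(f"speedup: {grpc_rps / rest_rps:.1f}x")  # ~7.6x
```

So in this POC, the gRPC version sustained roughly 7.6 times the request rate of the REST version.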

I was amazed to see that the gRPC-based system managed to handle all 1000K requests without a single timeout. The "read" sizes may look almost identical in MB across the two versions; that's because my payload was quite small, to be honest. As the payload size grows, the difference becomes more visible, and in those cases gRPC services save a lot of bandwidth on the servers/pods.

The Github repository can be found here:
gRPC + Protobuf POC

Let me know what you think, whether you have observed the same, or if you're interested in talking about gRPC and Protobuf.
