Luis Sena

Originally published at luis-sena.medium.com

Achieving Sub-Millisecond Latencies With Redis by Using Better Serializers.

How some simple changes can result in less latency and better memory usage.

Redis Strings are probably the most used (and abused) Redis data structure.

One of their main advantages is that they are binary-safe: this means you can save any type of binary data in Redis.

But as it turns out, many Redis users serialize their objects to JSON strings and store those in Redis.

What's the problem, you might ask?

  • JSON serialization/deserialization is CPU-intensive compared with binary formats
  • You use more storage space (which is expensive in Redis, since it's an in-memory database)
  • You increase your overall service latency without any real benefit

Using JSON to store data in Redis will increase your latency and resource usage without bringing any real benefit.
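The size gap is easy to see with the standard library alone. The sketch below uses pickle as a stand-in for a binary format, since the article's benchmarks use MessagePack, which requires a third-party package; floats are the worst case for JSON, which spells out every digit as text:

```python
import json
import math
import pickle

# A payload of floats: a text format like JSON stores each number
# as a string of digits, while a binary format stores 8-byte doubles.
payload = [math.pi] * 100

as_json = json.dumps(payload).encode("utf-8")
as_binary = pickle.dumps(payload)

print(len(as_json), len(as_binary))  # the binary form is roughly half the size

# Both round-trip back to the original object.
assert json.loads(as_json) == payload
assert pickle.loads(as_binary) == payload
```

The same shape of saving applies to MessagePack, with the added benefit that MessagePack is language-independent, while pickle is Python-only.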

Another "simple" optimization you can use is compression.

Whether it pays off depends on the use case, since it is a trade-off between size, latency, and CPU usage.

Algorithms like ZSTD or LZ4 can be used with minimal CPU overhead, resulting in some good storage savings.
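A minimal sketch of the idea, using the standard library's zlib as a stand-in (ZSTD and LZ4 need the third-party `zstandard` and `lz4` packages, but the call pattern is the same): serialize first, then compress the bytes before writing them to Redis.

```python
import json
import zlib

# A repetitive payload, e.g. a list of similar records: exactly the
# kind of data where compression pays off.
records = [{"id": i, "status": "active", "region": "eu-west-1"} for i in range(500)]
raw = json.dumps(records).encode("utf-8")

compressed = zlib.compress(raw, level=6)
print(len(raw), len(compressed))  # sizes before and after compression

# Decompression restores the exact bytes, so deserialization is unchanged.
assert zlib.decompress(compressed) == raw
```

The compression level is the knob for the size/CPU trade-off mentioned above: higher levels shrink the payload further at the cost of more CPU time per operation.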

The following charts show how much you gain just by switching from JSON to a binary format like MessagePack.

These charts also include the serialization/deserialization times.

We can also see that we can save some storage/memory by using compression at the expense of some latency.

Using a random JSON object with different attributes

The previous charts showed a fairly complex JSON object, which LZ4 handles well in terms of compression ratio. When we need to compress arrays of floats, however, ZSTD has the advantage, as the next charts show.

Here I ran the benchmarks with arrays of different sizes.

Using a small array of floats

Using a big array of floats

As you can see, just by switching from JSON to MessagePack, you can reduce your latency by more than 3x without any real disadvantage!

Here is a simple example using Python to set/get a Redis String with JSON and MessagePack:
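The original snippet was embedded as an image; below is a minimal sketch of the same idea, assuming the `redis` and `msgpack` packages (`pip install redis msgpack`). The `FakeRedis` stub is only here so the example runs without a live server; in a real deployment you would use `redis.Redis(host=..., port=...)` instead.

```python
import json

import msgpack  # pip install msgpack


class FakeRedis:
    """In-memory stand-in for redis.Redis so the sketch runs without a server."""

    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)


# In production: r = redis.Redis(host="localhost", port=6379)
r = FakeRedis()

user = {"id": 42, "name": "Luis", "scores": [0.91, 0.87, 0.99]}

# JSON: text on the wire, larger and slower to (de)serialize.
r.set("user:42:json", json.dumps(user))
from_json = json.loads(r.get("user:42:json"))

# MessagePack: the same set/get API, but a compact binary payload.
r.set("user:42:msgpack", msgpack.packb(user))
from_msgpack = msgpack.unpackb(r.get("user:42:msgpack"))

assert from_json == from_msgpack == user
```

The only change from the JSON version is swapping `json.dumps`/`json.loads` for `msgpack.packb`/`msgpack.unpackb`; everything on the Redis side stays the same, because Redis Strings are binary-safe.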

As you can see, it's as simple as using JSON.


How does this all sound? Is there anything you'd like me to expand on? Let me know your thoughts in the comments section below (and hit the clap if this was useful)!

Stay tuned for the next post. Follow so you won't miss it!

Top comments (1)

Pedro Asad
Interesting study! I also find Python's built-in pickle to be a good alternative for (de)serializing from/to Redis in some cases. In one anecdotal case of mine, with an 11 KB JSON object, pickle is comparably:

  • Fast (about 13% slower than msgpack),
  • Compact (about 5% larger than msgpack),

But most importantly, it allows you to share native Python objects with other Python processes (think of a web application with multiple workers) without requiring the to/from-JSON round trip, which inevitably adds noticeable overhead once you go past nested dictionaries of lists of dictionaries of whatever.
