
Pavel Rossinsky

How Finding the Right Compression Level Can Speed Up Your Website

While debugging performance problems in a Symfony-based project, I noticed that certain parts of the execution stack were consuming a lot of CPU time. A minute later, the suspect was found: gzdeflate($serializedObject, 9).

The function is part of PHP's zlib extension and is commonly used in PHP projects to compress and decompress strings, for example before sending a cache entry to Redis.
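
For context, a typical use looks roughly like this (a minimal sketch assuming the phpredis extension; the key name and payload are illustrative, not from the original project):

```php
<?php
// Minimal sketch of compressing a cache entry before storing it in Redis.
// Assumes the phpredis extension; the key and payload are made up.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$serializedObject = serialize(['id' => 42, 'payload' => str_repeat('x', 100000)]);

// Write path: compress the serialized value, here with the maximum level 9.
$redis->set('cache:example', gzdeflate($serializedObject, 9));

// Read path: gzinflate() reverses gzdeflate().
$cached = unserialize(gzinflate($redis->get('cache:example')));
```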

My first thought: the compression level was set to 9, the maximum, and should be lowered to reduce CPU consumption. But how do you find the sweet spot? Googling didn't answer my question, so I decided to write a script that benchmarks compression and decompression at different levels.
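
The idea behind the script is simple: time gzdeflate() and gzinflate() at every level on a representative payload. A minimal sketch of that idea (not my exact script; fixture.bin stands in for any test file):

```php
<?php
// Minimal benchmark sketch: time compression and decompression at each level.
// 'fixture.bin' is a placeholder for any representative payload.
$data = file_get_contents('fixture.bin');

foreach (range(1, 9) as $level) {
    $start = hrtime(true);
    $compressed = gzdeflate($data, $level);
    $compressMs = (hrtime(true) - $start) / 1e6;

    $start = hrtime(true);
    gzinflate($compressed);
    $decompressMs = (hrtime(true) - $start) / 1e6;

    printf(
        "level %d: compress %6.2f ms, decompress %6.2f ms, ratio %.1f\n",
        $level,
        $compressMs,
        $decompressMs,
        strlen($data) / strlen($compressed)
    );
}
```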

Benchmark

The following results were obtained with a script benchmarking the gzdeflate and gzinflate functions of the php-zlib extension. The benchmark was run with PHP 7.4.33 on an Amazon EC2 c5.xlarge instance. C5 instances are built on 3.0 GHz Intel Xeon Scalable (Skylake) processors and can run at speeds up to 3.5 GHz using Intel Turbo Boost Technology.
Benchmarks for the other zlib compression functions, gzencode and gzcompress, are no different from gzdeflate.

| Compression Level | Compression Speed, MB/s | Decompression Speed, MB/s | Ratio | Space Saving, % | Ratio / Time |
| --- | --- | --- | --- | --- | --- |
| 1 | 137.16 | 64.41 | 7.4 | 86.6 | 2246.223 |
| 2 | 131.49 | 63.84 | 7.8 | 87.1 | 2246.826 |
| 3 | 115.43 | 60.84 | 8.0 | 87.5 | 2039.500 |
| 4 | 90.59 | 59.36 | 8.8 | 88.7 | 1764.139 |
| 5 | 79.66 | 58.89 | 9.5 | 89.4 | 1661.567 |
| 6 | 57.36 | 58.92 | 10.0 | 90.0 | 1265.741 |
| 7 | 48.05 | 58.22 | 10.1 | 90.1 | 1069.743 |
| 8 | 29.67 | 57.38 | 10.2 | 90.2 | 669.068 |
| 9 | 27.99 | 56.54 | 10.2 | 90.2 | 631.573 |

Conclusion

Running hundreds of tests with different file sizes and content types (e.g. serialized objects, HTML, plain text) has shown that high compression levels do not make a big difference in space saving but are extremely expensive in terms of CPU time. For example, compressing a 2.4 MB serialized object at level 9 takes 69.90 ms and yields a 212 kB file. Level 6 (the default) takes 37.23 ms and yields 217 kB. Level 2 takes 17.59 ms and yields 228 kB. Now assume the cache is stored in Redis and we want to account for the time spent transferring the extra bytes over the network: sending the extra 16 kB (228 − 212) over a 10 Gbps link takes about 13 µs (16 kB ≈ 0.128 Mbit), well under 1 ms and far less than the roughly 52 ms of CPU time saved by dropping from level 9 to level 2.
You can find more detailed information here.

This also explains why Nginx uses gzip compression level 1 by default, favoring speed over file size savings.

I defined the golden mean as the ratio of the compression ratio to the time spent on compression (the last column in the table). The higher the value, the better. But the right choice depends on the task at hand: if your application stores terabytes of compressed data, a compression level of 9 may be appropriate; if you're aiming for high performance, levels 2 and 3 will do.
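
In code, that metric can be computed roughly like this (a hedged sketch; the function name is mine, not from the benchmark script):

```php
<?php
// Hypothetical helper mirroring the "Ratio / Time" column: the compression
// ratio divided by the seconds spent compressing. Higher values mean more
// space saved per unit of CPU time.
function compressionScore(string $data, int $level): float
{
    $start = hrtime(true);
    $compressed = gzdeflate($data, $level);
    $elapsedSec = (hrtime(true) - $start) / 1e9;

    return (strlen($data) / strlen($compressed)) / $elapsedSec;
}
```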

The result of this analysis: changing one digit in the code made the site 16% faster than before and decreased average CPU usage by 58%.

If this post is useful to someone, I could write a second part comparing modern compression algorithms such as zstd, brotli, snappy, and lz4.
