DEV Community

Sam Ternent


JSON compression in the browser, with gzip and the Compression Streams API.

... just not yet in Firefox


When building web apps, we sometimes need to persist large JSON objects, either directly in the browser using Web Storage, or externally through the File or Fetch APIs.

This problem led me to ask the question: is there a better way to store raw or encoded JSON? Can we achieve some level of compression?

As it turns out, the answer is yes, and it's surprisingly simple with the Compression Streams API.

This example was created for the specific needs of Web Storage, File and Solid Pod persistence, for a scaling representation of JSON application state.


To set the scene a little, the project I am working on doesn't communicate with a server. Application data is structured transactionally in a Merkle tree (as in git or a blockchain). The tree naturally grows with every interaction with the application, and along with it, its memory and storage footprint.

To keep an eye on this growth, I built a small indicator into the control bar to visualise the size. As the JSON object grew, the data reached a size I wasn't completely comfortable with, so I decided to take a look and see if I could do better.

943 kB of raw JSON

This brought me to the Compression Streams API: a browser API that enables gzip compression of a ReadableStream, directly in the browser.

TL;DR After enabling gzip compression on my JSON data, I saw a size reduction of 98%, from 943 kB to 10 kB.

10 kB of compressed JSON

This wasn't a scientific test and the data didn't scale how it would organically in the app, but it serves as a fair benchmark that I will get the results I want from this API.
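If you want to run this kind of measurement yourself, here's a minimal sketch. `measureCompression` is a name I've made up for this post, and the exact ratio you see will depend entirely on how repetitive your payload is.

```javascript
// Sketch: compare the raw and gzip-compressed size of a JSON object.
// measureCompression is a hypothetical helper, not a library function.
async function measureCompression(data) {
  // Raw size: the serialized JSON wrapped in a Blob
  const raw = new Blob([JSON.stringify(data)]);

  // Compressed size: pipe the same bytes through a gzip CompressionStream
  const compressedStream = raw
    .stream()
    .pipeThrough(new CompressionStream("gzip"));
  const compressed = await new Response(compressedStream).blob();

  return { rawBytes: raw.size, compressedBytes: compressed.size };
}
```

Highly repetitive structures (like a growing transaction log) compress far better than random data, which is why the savings here were so dramatic.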


As the Compression Streams API is part of the Streams API, we first need to transform our JSON data object to a ReadableStream.

A Blob can be constructed from text, and JSON can be serialized to text, so to start we can create a Blob from our JSON and get our stream from there.

// Convert JSON to a ReadableStream via a Blob
const stream = new Blob([JSON.stringify(data)], {
    type: 'application/json',
}).stream();

Now that we have our ReadableStream, we can use the ReadableStream.pipeThrough method to pipe our data through the gzip CompressionStream transform.

// gzip the stream
const compressedReadableStream = stream.pipeThrough(
    new CompressionStream("gzip")
);

Then, to get this compressed stream back into our main code, we can create a new Response and handle the data like any other response we would receive from the Fetch API.

// Create a Response from the compressed stream
const compressedResponse = new Response(compressedReadableStream);

So what can we do with this data now it's compressed?

Well, quite a lot actually.

We could base64 encode it to a string and then store it in localStorage.

// Get the response Blob
const blob = await compressedResponse.blob();
// Get the ArrayBuffer
const buffer = await blob.arrayBuffer();

// Convert the ArrayBuffer to a base64-encoded string
const compressedBase64 = btoa(
  String.fromCharCode(...new Uint8Array(buffer))
);

// Store in localStorage
localStorage.setItem('compressedData', compressedBase64);
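One caveat: spreading a large Uint8Array into String.fromCharCode can throw a RangeError once the array exceeds the engine's argument limit. A chunked variant sidesteps this; `bufferToBase64` is a name I've chosen here, not a standard API.

```javascript
// Sketch: chunk-safe base64 encoding for large buffers.
// bufferToBase64 is a hypothetical helper. String.fromCharCode(...bytes)
// can exceed the maximum argument count on big arrays, so we build the
// binary string in fixed-size chunks instead.
function bufferToBase64(buffer) {
  const bytes = new Uint8Array(buffer);
  const chunkSize = 0x8000; // 32 KiB of bytes per fromCharCode call
  let binary = "";
  for (let i = 0; i < bytes.length; i += chunkSize) {
    binary += String.fromCharCode(...bytes.subarray(i, i + chunkSize));
  }
  return btoa(binary);
}
```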

We could save it as a File.

// Get the response Blob
const blob = await compressedResponse.blob();

// Create a programmatic download link
const elem = window.document.createElement("a");
elem.href = window.URL.createObjectURL(blob);
elem.download = "";
elem.click();

Or we could take a more traditional approach, sending it over HTTPS with the Fetch API.

// Get the response Blob
const blob = await compressedResponse.blob();

// Note: the server must be set up to decode a gzipped request body
await fetch(url, {
  method: 'POST',
  body: blob,
  headers: {
    'Content-Encoding': 'gzip',
  },
});

And I'm sure there are many other techniques that can be used with our newly compressed JSON object.

Pretty cool, right? But how do we get our data back into a readable form in the browser?

To decompress, we first want to get our compressed data back into a ReadableStream.

Either directly from our Blob (or File/fetch Response):

const stream = (await compressedResponse.blob()).stream();

Or by decoding it back from base64.

// Decode the base64 string back to bytes, wrap in a Blob, and stream
const bytes = Uint8Array.from(
  atob(compressedBase64), (c) => c.charCodeAt(0)
);
const stream = new Blob([bytes]).stream();

Then, this time with our compressed ReadableStream, we go back through the ReadableStream.pipeThrough method, piping our data through the gzip DecompressionStream.

const decompressedReadableStream = stream.pipeThrough(
  new DecompressionStream("gzip")
);

We then wrap this in a Response again, take the Blob, and call the Blob.text() method to get the content back in string format.

const resp = new Response(decompressedReadableStream);
const blob = await resp.blob();

All that's left is to parse this string back to JSON, and we have our decompressed object, back in JavaScript and ready to use.

const data = JSON.parse(await blob.text());
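Putting the pieces together, the whole round trip can be sketched as two small helpers. The names `compressJSON` and `decompressJSON` are mine, not from any library.

```javascript
// Sketch: compress a JSON-serializable object to a gzipped Blob,
// and decompress that Blob back into an object.
async function compressJSON(data) {
  const stream = new Blob([JSON.stringify(data)], {
    type: "application/json",
  }).stream();
  const compressed = stream.pipeThrough(new CompressionStream("gzip"));
  return new Response(compressed).blob();
}

async function decompressJSON(blob) {
  const stream = blob.stream().pipeThrough(new DecompressionStream("gzip"));
  const text = await new Response(stream).text();
  return JSON.parse(text);
}
```

The Blob returned by compressJSON is exactly what we stored, uploaded, or downloaded earlier, so decompressJSON works on any of those sources.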

Demo and Results

I've put together a small CodeSandbox that takes a few online JSON sources and runs them through the gzip CompressionStream transform described in the solution above.

Oh, and yes. ❤️ VueJS.


Firstly, I had a blast writing this piece and exploring the Compression Streams API. It's simple to use and feels like a really nice addition to the JavaScript ecosystem.

In terms of how useful the API itself is, it has solved a real problem for me in being able to reduce my persistent data footprint.

I have already built this into a small feature in my app, but I plan to integrate it more deeply and bake it into the core functionality soon. That should be another interesting problem to solve, as the app has integrated client-side encryption using Age (via rage / rage-wasm). But that's for another day...

It's a little disappointing that this API isn't fully supported yet, at only 72.53% support coverage according to CanIUse. But I still think it's very worthwhile. We can implement it to fail gracefully where the Compression Streams API is not supported and simply forgo the compression, so that's not a deal breaker for me.
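A graceful fallback can be as simple as a feature check before choosing a storage format. This is a sketch of one possible approach, not the implementation from my app; `serializeForStorage` is a made-up name, and in practice you'd also record which format was stored so the read path knows whether to decompress.

```javascript
// Sketch: fall back to storing raw JSON when the
// Compression Streams API is unavailable (e.g. older Firefox).
function serializeForStorage(data) {
  if (typeof CompressionStream === "undefined") {
    // No Compression Streams API: store the uncompressed JSON string.
    return Promise.resolve(JSON.stringify(data));
  }

  // Supported: gzip the JSON and base64-encode the result for storage.
  const stream = new Blob([JSON.stringify(data)])
    .stream()
    .pipeThrough(new CompressionStream("gzip"));
  return new Response(stream)
    .blob()
    .then((blob) => blob.arrayBuffer())
    .then((buffer) => btoa(String.fromCharCode(...new Uint8Array(buffer))));
}
```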

I'm not sure how often the Compression Streams API will be a tool I reach for, but I'm glad to have learnt it and I'm sure this won't be the last time I use it, or at the very least consider it.
