HTTP caching is a superpower

Hugh Haworth

What do people mean when they say “use the platform” or “web fundamentals”?

The web is well designed: It’s reliable, it’s consistent, and it’s open. Some sites built on it aren’t so great. But we keep coming back anyway. I’d maintain that the original design of the web is, and always has been, pretty damn good. If we have a deeper look into its basic building blocks, we can improve our websites.

The web got started with a few fundamental features: Hypertext Markup Language, the Uniform Resource Locator, and what we’ll be looking at here: the Hypertext Transfer Protocol. These features give web browsers the ability to fetch and present data. But they can also instruct the browser on how to perform these functions. Why should you care about these specs from the ‘90s? Here's why:

We can continue to improve experiences using features the web has provided for decades.

Let’s think about how we become web developers. Usually it’s something like this:

  • We start as consumers of the web. We learn how to use browsers, including the “back” button and the hyperlink. We learn what URLs are by typing them into the address bar.
  • One day we decide we want to learn how to make websites. The roadmap usually tells you to start with authoring HTML, then styling with CSS, then building interactivity with JavaScript (and/or producing dynamic documents with server-side languages).
  • We end up pretty far ahead before looking at a very basic part of the stack in any meaningful way: the humble hypertext transfer protocol.

HTTP 1.1

I want to take us back to 1997. Candle in the Wind is playing on the radio. You’re grieving for the People’s Princess. You’re wearing low-rise baggy jeans. Bill Clinton is starting his second term. Apple is a computer company that seems to have lost the war against Microsoft. A mobile phone is a suitcase with a phone attached to it.

This was the year the RFC for HTTP 1.1 (RFC 2068) was officially released. Why was it released? The purpose section tells you:

“The first version of HTTP, referred to as HTTP/0.9, was a simple protocol for raw data transfer across the Internet. HTTP/1.0, as defined by RFC 1945 [6], improved the protocol by allowing messages to be in the format of MIME-like messages, containing metainformation about the data transferred and modifiers on the request/response semantics. However, HTTP/1.0 does not sufficiently take into consideration the effects of hierarchical proxies, caching, the need for persistent connections, and virtual hosts.”

While Britain was handing keys of Hong Kong government offices to China, a bunch of developers were working hard to improve HTTP: The very protocol Facebook uses today to send you misinformation and bizarre craft videos.

There was a whole lot of good stuff that came from the HTTP 1.1 proposal. We got persistent TCP connections. This meant less work handshaking between computers. We got PUT, DELETE, TRACE, and OPTIONS for request methods. We got the Content-Encoding header so we could compress the data over the wire. We got the ability to have one IP address serve multiple domains.

And on page 100 of the proposal you’ll find we got a new header: “Cache-Control”.

The Cache-Control Header

This HTTP header will make your site faster. It'll improve user experience. It’ll give you better SEO. It's been doing all this since before 9/11 and people don't talk about it enough. You don’t need any specific framework or language to use it. It’s not “hot”. It is an underrated building block of the web platform which is probably offsetting the carbon footprint of Ireland (an estimate I’ve made based on nothing).

How can we activate this superpower?

First, you’ll need some kind of access to the server side of your site and a way to specify the HTTP headers your application sends. If you’re running a PHP/Ruby/ASP.NET/Django site this shouldn’t be too hard. For each of these you’ll need to find out how to set headers on your HTTP response.

Each of these languages/frameworks documents how to set response headers.
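To make it concrete, here’s a minimal sketch of what this looks like in a bare Node server (I’m making up the /styles.css route and file; whatever framework you use will have its own way of doing the same thing):

```typescript
import { createServer } from "node:http";
import { readFileSync } from "node:fs";

createServer((req, res) => {
  if (req.url === "/styles.css") {
    // Tell the browser (and any proxies) to reuse this file for a year.
    res.setHeader("Cache-Control", "public, max-age=31536000");
    res.setHeader("Content-Type", "text/css");
    res.end(readFileSync("styles.css"));
    return;
  }
  res.setHeader("Content-Type", "text/html");
  res.end("<p>Hello</p>");
}).listen(3000);
```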

If you’re going for a JAMstack/static site approach it might be a little more difficult.

I could attempt to find out how to set an HTTP header in AWS but that might take a lifetime. I expect developer-friendly companies like Netlify, Vercel, and Cloudflare provide easy ways to set these headers. I’m a masochist, so I host my static site on an Azure CDN and I found the way to set headers is using their Rules Engine.

Also, if you’re a quote-unquote “real” programmer - with a grey ponytail, a history of taking LSD in the ‘70s, and a loathing for writing in any language other than C - you can configure HTTP headers with Apache, or with Nginx.

What does HTTP caching do?

HTTP caching allows us to stop browsers from re-fetching information they already have.

This is why server-rendered websites will usually load faster on the second page than the first. When I was learning Laravel, my sites always took about two seconds to load: the first page, the second page, the third. Always. I was re-fetching CSS, images, and other assets. Every time I clicked on a new page my browser would ask the server for everything all over again. That included fonts - and I can’t imagine any case where you’d want the same URL to ever return a different font.

How did other Laravel developers get their pages to load so fast on the second run? How did they prevent the page re-fetching everything? Did they always use turbolinks to fetch each page using JavaScript and rerender? Were they always pulling in Vue and fetching JSON data?

The (far simpler) answer was HTTP caching. You can see it in action if you head into your browser’s developer tools, jump into the network tab, and refresh a page. Under “Size” you’ll usually see a bunch of files marked “disk cache” or “memory cache”. These files are the beneficiaries of somebody, somewhere setting a Cache-Control header.

You can see the amount of data they’ve saved going across the wire by keeping your developer tools open and doing a hard refresh (Ctrl + F5 on Windows, Cmd + Shift + R on Mac). You might have even used a hard refresh before without knowing what it does: it’s a refresh which ignores the browser cache.

You don’t have to tell the browser to do this using JavaScript. This logic is baked into the platform.

What should we cache?

We first need to decide what to cache.

We don’t want any dynamic content to be cached, otherwise when we refresh the page or click on a link, we might get stale data.

For example, let’s say we’re on a Neopets forum. We write a comment. Something like: “The most beautiful Neopets item is the fire and ice blade!”. In a traditional server-rendered forum we’d get redirected back to the original post. And we’d want the HTML page to be re-fetched from the server. Otherwise our contribution to the conversation won’t be visible to us - the browser would have shown us the cached file instead of re-fetching the updated page. There is a way we can cache the file and still check with the server whether things have changed, using something called an Etag - more on this later.

However, there will be many things on your Neopets forum post which don’t need to change when your browser makes a new request. The styles won’t need to change and can be cached. This is one of the original arguments for separating CSS files from HTML. The fonts will stay the same. The images will usually stay the same. You might have some JSON data that you can trust will never change - for example, I built a phonetic alphabet quiz app, and I can trust that the phonetic alphabet JSON won’t change. If any of these do need to be updated, you also have the option of serving them from a different URL.

A more controversial issue might be your scripts. If you find a terrible security flaw in your JavaScript bundle, you won’t want your users’ browsers to touch it with a ten-foot pole. If dangerous scripts get cached, your user will be stuck on the other side of the world with an evil gremlin in their browser. For this reason, if you are caching JavaScript, it’s a good idea to version your bundles to create new URLs for updates. You can set up build tools to do this, and there are a thousand Medium articles to show you how.
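If you’re curious what those build tools are doing under the hood, here’s a rough sketch of the idea (the file paths are hypothetical):

```typescript
import { createHash } from "node:crypto";
import { readFileSync, copyFileSync } from "node:fs";

// Hash the bundle's contents so the file name changes whenever the code does.
const source = readFileSync("dist/bundle.js");
const hash = createHash("md5").update(source).digest("hex").slice(0, 8);

// e.g. dist/bundle.3f2a9c1b.js - a URL no browser has cached yet.
copyFileSync("dist/bundle.js", `dist/bundle.${hash}.js`);
```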

Essentially, we want to cache an asset if we can comfortably say we’re not going to need to update it - and if we do need to update it, we can point to a different URL without affecting the user experience.

How do we use HTTP caching?

The number one way to use HTTP caching is to set the Cache-Control header to a max-age value. Here you can define (in seconds) how long you want the browser to keep using a cached version of the resource.

If we head to the Settlers of Catan website, we know we’re about to see some medieval fonts. A quick dive into the source and we can see they’re using Minion Pro. Nice. The Times New Roman of the sophisticated. On the network tab, after reloading the - *checks notes* - 9 MB of assets?! We can see Minion Pro is being fetched from the following URL:

https://use.typekit.net/af/ea8d85/0000000000000000000151d1/27/l?subset_id=2&fvd=n7&v=3

If we click into the headers of the response, we see:

cache-control: public, max-age=31536000

60 seconds x 60 minutes x 24 hours x 365 days = 31536000, so the font host is telling our browser to cache the font for a full year.
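If you’re setting this value yourself, it’s kinder to future-you to write the arithmetic out than to hard-code the magic number. A trivial sketch:

```typescript
// max-age is expressed in seconds, so a year is:
const ONE_YEAR = 60 * 60 * 24 * 365; // 31536000

// Build a header value like the one the font host sends.
const cacheControl = `public, max-age=${ONE_YEAR}`;
console.log(cacheControl); // "public, max-age=31536000"
```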

We know what max-age means. What about public? Well, here’s where we find out that your browser isn’t the only thing that caches the request. When we make HTTP requests, they will usually pass through proxy servers - servers in between the client and the server.

If we set cache-control to public, these proxy servers will also cache our responses. Even if the browser doesn’t find the file in its cache, the proxy server may prevent a request from getting all the way to our server. This saves our server compute time and effort. We may not want proxies to store private information, so we have the option to set cache-control to private instead, which will only allow the browser to cache the response.

What other instructions can we give using the cache-control header?

If we don’t want the resource to be cached, we have the option of using Cache-Control: no-store. This will make sure that CDNs, proxies, and browsers don’t store something we don’t want them to. Good for anything that we know will change from request to request.
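Side by side, the directives we’ve seen so far might get used like this in a hypothetical Node handler (the routes are made up for illustration):

```typescript
import { createServer } from "node:http";

createServer((req, res) => {
  if (req.url?.startsWith("/assets/")) {
    // Static files: browsers *and* proxies may keep these for a year.
    res.setHeader("Cache-Control", "public, max-age=31536000");
  } else if (req.url === "/account") {
    // Personal data: only the user's own browser may cache it, briefly.
    res.setHeader("Cache-Control", "private, max-age=60");
  } else {
    // Dynamic content that changes every request: store it nowhere.
    res.setHeader("Cache-Control", "no-store");
  }
  res.end("ok");
}).listen(3000);
```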

This is not to be confused with Cache-Control: no-cache, which doesn’t actually stop the browser from caching. Instead, this tells the browser to check with the server every time whether the resource has changed, and only if it hasn’t changed will it use the cached version. This raises the question: how does the server tell the browser whether the resource has changed?

This brings us to our next topic:

Etags

Etags are yet another way to stop a resource from needing to be downloaded.

An Etag is an arbitrary string which the server can choose to provide with a resource. It will usually be a hash generated from the body of a response (with something like MD5), or generated from the last-modified time of a file. It’s up to the server how this is done.

The browser will store the Etag when it first downloads a resource. Later, if we refresh our website and the following requirements are met:

  • the browser has the file in its cache, and either
  • the max-age has expired, or
  • the cache-control from a previous request was set to no-cache

Then the browser will re-request the resource, sending the previously downloaded Etag in an If-None-Match header. The server will then generate a response with an Etag. If this new Etag matches what the browser sent, the server won’t send the whole response back. Instead it will send a 304 response with no body. This is HTTP-speak for saying “the file in your cache is fine”, and the entire response will only be about 80 to 100 bytes. A little bit smaller than when Jason Miller writes a DOM library.
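Here’s a rough sketch of that handshake from the server’s side, hashing the response body with MD5 as described above (the page content is made up):

```typescript
import { createServer } from "node:http";
import { createHash } from "node:crypto";

createServer((req, res) => {
  const body = "<p>The most beautiful Neopets item is the fire and ice blade!</p>";
  // Derive the Etag from the body: same body, same Etag.
  const etag = `"${createHash("md5").update(body).digest("hex")}"`;

  if (req.headers["if-none-match"] === etag) {
    // The browser's cached copy is still good: 304, no body, ~100 bytes.
    res.writeHead(304, { ETag: etag });
    res.end();
    return;
  }

  // First visit (or the body changed): send everything, plus the Etag.
  res.writeHead(200, { ETag: etag, "Content-Type": "text/html" });
  res.end(body);
}).listen(3000);
```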

This Etag method of caching isn’t as performant as when the browser pulls from its cache without a request - we still have to send an HTTP request over the wire and ask the server to check if anything has changed. The good part is that the server gets fine-grained control over when to bust the cache. So using Etag responses, we could actually cache more dynamic content, like HTML files, as long as we find a way to generate a new Etag based on our response body or the last-modified time of a file.

Conclusion

You now have a simple, platform-reliant way of preventing unnecessary requests. You have another tool in your belt to save your users time and money. Also you’ve got a way to save a little carbon from being released into our atmosphere to power a server farm. And you can use this tool with any style of website: static file sites, single page applications, and server rendered applications. It’s a superpower.

But don’t take it from me - plenty of people smarter than me have written on this topic.

And if this isn’t enough power for you and you want even more out of your browser cache, you can take it to the next level with service workers.

Top comments (2)

Tom Xu

What about security concerns?

Hugh Haworth

Good question. I'm sure there are security concerns storing data on client devices and proxy servers. Also I mentioned the concern of insecure js getting cached on the client. Let me do some more research and get back to you!