This is a story about a lot of things:
- Fitting a Fortune 20 site in 20kB
- Diving into site speed so deep we’ll see fangly fish
- React thwarting my goal of serving users as they are
- Burning out trying to do the right thing
- And by the end, some code I dare you to try.
React/Redux packages used totaled 44.7 kB before any feature code.
Our WebPageTest results spoke for themselves.
This was after investing in Server-Side Rendering (SSR), a performance team, and automated regression testing.
In particular, React SSR was one of those changes that looks faster, but looks can be deceiving. In retrospect, I’m amazed developers get away with considering SSR+rehydration an improvement at all.
| Make your code faster… by running it twice!
| —how React SSR works, apparently
I used to ask other developers to stop writing slow code.¹ Such as…
“Please cut down on the `<div>`s, they make our DOM big and slow.”
“Please avoid CSS like `.Component > * + *`, it combines with our big DOM into noticeable lag.”
“Please don’t use React for everything, it caps how fast we can be.” (Especially if it renders big DOMs with complex styles…)
Nobody listened. But, honestly, why would they?
This carried on, and it was cool/cool/depressing/cool. But a new design system inflicted enough Tailwind to hurt desktop Time to First Paint by 0.5 seconds, and that was enough to negotiate for a dedicated Web Performance team.
Which went well, until it didn’t. Behold, the industry-standard life of a speed optimization team:
- Success with uncontroversial changes like better build configuration, deduplicating libraries, and deleting dead code
- Auditing other teams’ code and suggesting improvements
- Doing the improvements ourselves after said suggestions never escaped backlogs
- Trying to make the improvements stick with bundle size monitoring, Lighthouse checks in PRs, and other new layers of process
- Hearing wailing and gnashing of teeth about having to obey said layers of process
- Realizing we needed to justify why we were annoying everyone else before we were considered a net negative to the bottom line
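At least some of that process can be automated rather than argued about. As one hedged example (the numbers here are invented for illustration), Lighthouse’s LightWallet feature reads a `budget.json` and fails the audit when a category overruns it:

```json
[
  {
    "resourceSizes": [
      { "resourceType": "script", "budget": 130 },
      { "resourceType": "total", "budget": 300 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ]
  }
]
```

Budgets are in kilobytes for `resourceSizes` and request counts for `resourceCounts`, so a config like this is one way to turn “please stop shipping so much JS” into a failing check instead of a Slack argument.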
The thing was, WebPageTest frowning at our speed didn’t translate into bad mobile traffic — in fact, most users were on iPhone.² From a business perspective, when graphs go up and to the right, who cares if the site could be faster?
| It’s fast enough. You’ve seen those M1 benchmarks, right?
| You mean I have to care about this, too!? We just got done having to care about accessibility!
| I promise we will eventually consolidate on just three tooltip libraries if you let us skip the bundle check
| I should have realized the dark path I was going down when I tried to see if `npm install *` worked.
| I love my slow website.
Proving that speed mattered wasn’t enough: we also had to convince people emotionally. To show everyone, god dammit, how much better our site would be if it were fast.
So I decided to make a demo site that reused our APIs, but in a way that was as fast as possible.
Spoiler: surprising myself, I succeeded. And then things got weird. But before I can tell you that story, I have to tell you this story…
HTTP/1.1 204 No Content
This is the fastest web page. You may not like it, but this is what peak performance looks like.
That may seem unhelpful — of course a useful page is slower than literally nothing! — but anything added to a frontend can only slow it down. The further something pushes you from the Web’s natural speed, the more work needed to claw it back.
That said, some leeway is required, or I’d waste time micro-optimizing every little facet. You do want to know when your content, design, or development choices start impacting your users. For everything added, you should balance its benefits with its costs. That’s why performance budgets exist.
But to figure out my budget, I first needed some sort of higher-level goal.
🎯 Be so fast it’s fun on the worst devices and networks our customers use.
- Target device: bestselling phone at a local Kroger
  - Hot Pepper’s Poblano VLE5
  - $35 ($15 on sale)
  - Specs: 1 GB RAM, 8 GB total disk storage, and a 1.1 GHz processor
- Target connection: “slow 3G”
  - 400 kbps bandwidth
  - 400 ms round-trip time latency
  - At the time, what Google urged testing on, and what WebPageTest’s “easy” configuration & Lighthouse used
Unfortunately, connections get worse than the “slow 3G” preset; one example is cellular data inside said Kroger. Big-box store architectures double as Faraday cages, losing enough packets to sap bandwidth and inflate latency.
Ultimately, I went with “slow 3G” because it balanced the USA’s mostly-faster speeds with the signal interference inside stores. Alex Russell also mentioned “we still see latency like that in rural areas” when I had him fact-check this post.
(These device and connection targets are highly specific to this project: I walked inside stores with a network analyzer, asked the front desk which phone was the most popular, etc. I would not consider them a “normal” baseline.)
(Wait, don’t spotty connections mean you should reach for a Service Worker?)
Yes, when networks are so bad you must treat them as optional, that’s a job for Service Workers.
I will write about special SW sauce (teaser: offline streams, navigation preload cache digests, and the frontier of critical CSS), but even the best service worker is irrelevant for a site’s first load.
Although I knew what specs I was aiming for, I didn’t know what they meant for my budget. Luckily, someone else did.
Google seems to know their way around web performance, but they never officially endorse a specific budget, since it can’t be one-size-fits-all.
But while Google is cagey about specifics, Alex Russell — their former chief performance mugwump — isn’t. He’s written vital information showing how much the Web needs to speed up to stay relevant, and this post was exactly what I needed:
| Putting it all together, under ideal conditions, that puts our rough budget for critical-path resources (CSS, JS, HTML, and data) at:
| - 170KB for sites without much JS
| - 130KB for sites built with JS frameworks
(Alex has since updated these numbers, but they were the ones I used at the time. Please read both if you’re at all interested — Alex accounts for those worse-than-usual networks I mentioned, shows his work behind the numbers, and makes no bones about what exactly slows down web pages.)
Unfortunately, the hardware Alex cited clocks in at 2 GHz to the Poblano’s 1.1 GHz. That means the budget should shrink to 100 kB or so, but I couldn’t commit to that. Why?
I can’t publish exact figures, but at the time our payload was scarcely better. Barring discovery of the anti-kilobyte, I needed to figure out which third-parties had to go. Sure, most of them made $, but I was out to show that dropping them could make $$$.
After lots of rationalizing, I ended up with ≈138 kB of third-party JS I figured the business wouldn’t let me live without. Like the story of filling a jar with rocks, pebbles, and sand, I figured engineering around those boulders would be easier than starting with a “fast enough” site and having it ruined later.
Some desperate lazy-loading experiments later, I found my code couldn’t exceed 20kB (after compression) to heed Alex’s advice.
20 kilobytes ain’t much. `react` + `react-dom` are nearly twice that. An obvious alternative is the 4 kB Preact, but that wouldn’t help the component code or the Redux disaster — and I still needed HTML and CSS! I had to look beyond the obvious choices.
What does a website truly need? If I answered that, I could omit everything else.
Well, what can’t a website omit, even if it tried?
You can make a real site with only HTML — people did it all the time, before CSS and JS existed.
(Yes, I see you with the Svelte.js shirt in the back. I talk about it in the next post.)
So my plan seemed possible, and apparently profitable enough that Amazon does it. Seemed good enough to try.
Are you sure about that? The way I figured…
- If you inline CSS and generate HTML efficiently, their overhead is negligible compared to the network round-trip.
- Concatenating strings on a server should not be a huge bottleneck. And if it were, how does React SSR justify concatenating those strings twice into both HTML and hydration data?
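As a toy sketch of that point (the function name and markup here are invented for illustration), here’s a whole page rendered by plain template-literal concatenation, with the critical CSS inlined into the single response — real code would also escape any user-supplied strings:

```javascript
// Hypothetical renderer: one pass of string concatenation produces
// the entire page, with the CSS inlined so no extra request is needed.
const criticalCss =
  "body{font:16px/1.5 sans-serif;margin:0 auto;max-width:60ch}";

function renderPage({ title, items }) {
  const listItems = items.map((item) => `<li>${item}</li>`).join("");
  return `<!doctype html>
<html lang="en">
<head><meta charset="utf-8"><title>${title}</title><style>${criticalCss}</style></head>
<body><h1>${title}</h1><ul>${listItems}</ul></body>
</html>`;
}

const html = renderPage({ title: "Deals", items: ["Milk", "Eggs"] });
```

No virtual DOM, no hydration payload — the server does the string work exactly once.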
But don’t take my word for it — we’ll find out how that stacks up next time. In particular, I first need to solve a problem: how do you send a page before all its slow data sources finish?
1. That does not count as insider information. Any US website with a similar front-end payload will tell you the same. ↩
2. Those numbers were very loose, conservative estimates. They’re also no longer accurate — they’re much higher now — but they still work as a bare minimum. ↩