In part 2, I glossed over a lot when I wrote…
I decided this called for an MPA. (aka a traditional web app. Site. Thang. Not-SPA. Whatever.)
Okay, but why did I decide that? To demo the fastest possible Kroger.com, I should consider all options — why not a Single-Page App?
Alright, let’s try a SPA
While React was ≈2× too big, surely I could fit a SPA of some kind in my 20kB budget. I’ve seen it done: the PROXX game’s initial load fits in 20kB, and it’s smooth on wimpier phones than the Poblano.
So, rough estimate off the top of my head…
- The HTML is little else than `<script>`s, so its size is negligible.
- The remaining 5.4kB is plenty for some hand-tuned CSS.
- This total is pessimistic: Bundlephobia’s estimate notes “Actual sizes might be smaller if only parts of the package are used or if packages share common dependencies.”
Seems doable.
…But is that really all the code a SPA needs?
Some code I knew I’d need eventually:
- Translation between UI elements and API request/response formats
- Updates in the `<head>`
- Instead of fullsize image links, a lightbox script to avoid unloading the SPA
- Checking and reloading outdated SPA versions
- Code splitting to fit in the 20kB first load (my components estimate was only for the homepage), which means dynamic resource-loading code
- Reimplementing `beforeunload` warnings for unsaved user input
- Analytics need extra code for SPAs. Consider two particularly relevant to our site:
Buuuuuut I don’t have concrete numbers to back these up — I abandoned the SPA approach before grappling with them.
Why?
It’s not always about the performance
I didn’t want to demo a toy site that was fast only because it ignored the responsibilities of the real site. To me, those responsibilities (of grocery ecommerce) are…
- 🛡 Security even over accessibility
- Downgrading HTTPS ciphers for old browsers isn’t worth letting credit cards be stolen.
- You must protect data customers trust you with before you can use it.
- ♿ Accessibility even over speed
- A fast site for only the able creates inequality, but a slower site for everyone creates equality.
- Note that speed doubles as accessibility.
- 🏎️ Speed even over slickness
- Delight requires showing up on time.
- Users think fast sites are easier to use, better-designed, more trustworthy, and more pleasant.
Accordingly, I refused to compromise on security or accessibility. A speedup was worthless to me if it conflicted with those two.
Security
MPAs and SPAs share most security concerns; they both care about XSS/CSRF/other alphabet soups, and ultimately some code performs the defenses. The interesting differences are where that code lives and what that means for ongoing maintenance.
(Unless the stateless ideal of SPA endpoints leads to something like JWT, in which case now you have worse problems.)
Where security code lives
Take CSRF protection: in MPAs, it manifests in-browser as `<input type=hidden>` in `<form>`s and/or `samesite=strict` cookies; a small overhead added to the normal HTTP lifecycle of a website.
SPAs have that same overhead1, and also…
- JS to get and refresh anti-CSRF tokens
- JS to check which requests need those tokens, then attach them
- JS to handle problems in protected responses; to repair or surface them to the user somehow
Repeat for authentication, escaping, session revocation, and all the other titchy bits of a robust, secure app.
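To make that overhead concrete, here's a hedged sketch of what the SPA-side token plumbing tends to look like. The `/api/csrf-token` endpoint, the `X-CSRF-Token` header, and the single-retry policy are illustrative assumptions, not how the real site does it:

```js
// Sketch only: fetch an anti-CSRF token, attach it to mutating requests,
// and refresh it once if the server rejects it. Endpoint and header names
// are made up for illustration.
let csrfToken = null;

async function getCsrfToken() {
  if (!csrfToken) {
    const res = await fetch('/api/csrf-token', { credentials: 'same-origin' });
    csrfToken = (await res.json()).token;
  }
  return csrfToken;
}

async function apiFetch(url, options = {}) {
  const method = (options.method ?? 'GET').toUpperCase();
  const needsToken = !['GET', 'HEAD'].includes(method);
  const headers = new Headers(options.headers);
  if (needsToken) headers.set('X-CSRF-Token', await getCsrfToken());

  let res = await fetch(url, { ...options, headers, credentials: 'same-origin' });
  if (res.status === 403 && needsToken) {
    // The token may have expired mid-session; refresh once and retry.
    csrfToken = null;
    headers.set('X-CSRF-Token', await getCsrfToken());
    res = await fetch(url, { ...options, headers, credentials: 'same-origin' });
  }
  return res;
}
```

And that still isn't the "repair or surface problems to the user" code from the third bullet.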
Additionally, the deeper and more useful your security, the more the SPA approach penalizes all users. That rare-but-crucial warning to immediately contact support? It (or the code to dynamically load it) can either live in everyone’s SPA bundle, or only burden an MPA’s HTML when that stress-case happens.
What that means for security maintenance
With multiple exposed APIs, you must pentest/fuzz/monitor/etc. each of them. In theory that's no different than the same for each `POST`-able URL in an MPA, but…
Sure, 8 out of 9 teams jumped on updating that vulnerable input-parsing library. Unfortunately, Team 9’s senior dev was out this week and the juniors struggled with a dependency conflict and now an Icelandic teenage hacker ring threatens to release the records of anyone who bought laxatives and cake mix together.2
This problem is usually tackled with a unified gateway, which can saddle SPAs with request chains and more script kB just to contact it. But an MPA already is that unified gateway.
Accessibility
Unlike security, SPAs have accessibility problems exclusive to them. (Ditto any client-side routing, like Hotwire’s Turbo Drive.)
I’d need code to restore standard page navigation accessibility (a rough sketch follows this list):
- Remember and restore scroll positions across navigations.
- Focus last-used element on back navigations (accounting for `autofocus`, `tabindex`, JS-driven `.focus()`, other complications…)
- Focus a page-representative element on forward navigation, even if it’s not normally focusable. Which is filled with gotchas.
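Here's the rough sketch promised above: per-URL scroll and focus restoration, written as two functions a SPA router would call around each navigation. Everything here is illustrative, not any particular router's API:

```js
// Sketch only: remember scroll position and focused element per URL, then
// restore them on back navigation. A real router would call these two
// functions before and after each client-side navigation.
const saved = new Map();

export function rememberPageState() {
  saved.set(location.href, {
    scrollY: window.scrollY,
    focusSelector: document.activeElement?.id
      ? `#${document.activeElement.id}`
      : null,
  });
}

export function restorePageState({ isBackNavigation }) {
  const state = saved.get(location.href);
  if (!isBackNavigation || !state) return;
  window.scrollTo(0, state.scrollY);
  // Fall back to <main>, which needs tabindex="-1" to be focusable at all.
  document.querySelector(state.focusSelector ?? 'main')?.focus();
}
```

And this naive version already misses `autofocus`, elements without ids, and content that shifted while images loaded, which is rather the point.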
And do all that while correctly handling the other hard parts of client-side routing:
- Back/forward buttons
- Timeouts, retries, and error handling
- User cancellations and double-clicks
- History API footguns — see its proposed replacement’s reasoning
- Only use SPA navigation if a link is same-origin, not an in-page `#fragment`, the same `target`, needs authentication… (a sketch of these checks follows this list)
- Check for Ctrl/⌘/Alt/Shift/right/middle-click, keyboard shortcuts to open in new tabs/windows, or non-standard shortcuts configured by assistive tech (good luck with that last one)
- Reimplement browsers’ load/error UIs, and yours can’t be as good: they can report proxy misconfiguration, DNS failure, what part of the network stack they’re waiting on, etc.
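Here's the sketch of those link checks mentioned above. It's deliberately incomplete; assistive-tech shortcuts and auth-gated pages, for instance, aren't covered:

```js
// Sketch only: decide whether a link click is even safe to hijack for
// client-side routing. Every check omitted here is a bug report waiting.
function shouldClientRoute(event, anchor) {
  if (event.defaultPrevented) return false;                     // someone else handled it
  if (event.button !== 0) return false;                         // middle/right click
  if (event.metaKey || event.ctrlKey || event.altKey || event.shiftKey) return false;
  if (anchor.target && anchor.target !== '_self') return false; // opens elsewhere
  if (anchor.hasAttribute('download')) return false;

  const url = new URL(anchor.href, location.href);
  if (url.origin !== location.origin) return false;             // cross-origin
  if (url.pathname === location.pathname && url.hash) return false; // in-page #fragment
  return true;
}

document.addEventListener('click', (event) => {
  const anchor = event.target instanceof Element && event.target.closest('a[href]');
  if (anchor && shouldClientRoute(event, anchor)) {
    event.preventDefault();
    // …hand the URL off to the SPA router here…
  }
});
```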
In theory, community libraries should help avoid these problems…
The majority of routers for React, and other SPA frameworks, do this out of the box. This has been a solved problem for half a decade at least. A website has to go out of its way to mess this up.
We use those libraries at work, and let me tell you: we still accidentally mess it up all the time. I asked a maintainer of a popular React router for his take:
We can tell you when and where in the UI you should focus something, but we won’t be focusing anything for you. You also have to consider scroll positions not getting messed up, which is very app-specific and depends on the screen size, UI, padding around the header, etc. It’s incredibly difficult to abstract and generalize.
Even worse, some SPA accessibility problems are currently impossible to fix:
For example, screen readers can produce a summary of a new page when it’s loaded, however it’s not possible to trigger that with JavaScript.
Also, remember how speed doubles as accessibility?
For accessibility as well as performance, you should limit costly lookups and operations all the time, but especially when a page is loading.
Emphasis mine — if you should avoid heavy processing during page load, then SPAs have an obvious disadvantage.
Let’s say I do all that, though. Sure, it sounds difficult and error-prone, but theoretically it can be done… by adding client-side JS. And thus we’re back to my original problem.
Sometimes it is about the performance
You probably don’t have my 20kB budget, but 30–50kB budgets are a thing I didn’t invent. And remember: anything added to a page only makes it slower.
Beyond budget constraints, seemingly-small JS downloads have real, measurable costs on cheap devices:
| Code | Startup delay |
| --- | --- |
| React | ~163ms |
| Preact | ~43ms |
| Bare event listeners | ~7ms |
Details in Jeremy’s follow-up post. Note these figures are from a Nokia 2, which is more powerful than my target device.
I see you with that “premature optimization” comment
The usual advice for not worrying about front-end frameworks goes like this:
- Popular frameworks power some sites that are fast, so which one doesn’t matter — they all can be fast.
- Once you have a performance problem, your framework provides ways to optimize it.
- Don’t avoid libraries, patterns, or APIs until they cause problems — or that’s how you get premature optimization.
The idea of “premature optimization” has always been more nuanced than that, but this logic seems sound.
However… if you add a library to make future updates faster at the expense of a slower first load… isn’t that already a choice of what to optimize for? By that logic, you should only opt for a SPA once you’ve proven the MPA approach is too slow for your use.
Even 10ms of high memory spikes can cause ZRAM kicking in (suuuuper slow) or even app kills. The amount of JS sent to P99 sites is bad news
ZRAM impact is system wide. Keyboard may not show up quickly because the page used too much.
Having less code can make everything faster.
I had performance devtools open and dogfooded my own code as I made my demo. Each time I tested a heavier JS approach, I had tangible evidence that avoiding it was not prematurely optimizing.
It’s not just bundle size
Beyond the inexorable gravity of their client-side JavaScript, SPAs have other non-obvious performance downsides.
In-page updates buffer the Web’s streaming: JSON renders slower than HTML because it can’t render incrementally. Even if you don’t intentionally stream, browsers incrementally render HTML bytes as they arrive from the network.
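A minimal illustration, assuming a hypothetical `/api/products` endpoint and `renderProductList()` component: streamed HTML can paint as its first bytes arrive, but this typical SPA data flow can't paint anything until the entire JSON body has downloaded and parsed.

```js
// Sketch only: the usual SPA fetch-then-render flow. Nothing appears on
// screen until the whole response body is in memory.
async function loadProducts() {
  const res = await fetch('/api/products'); // hypothetical endpoint
  const products = await res.json();        // waits for the full body
  renderProductList(products);              // hypothetical component; first paint happens only after this
}
```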
Memory leaks are inevitable, but they rarely matter in MPAs. In SPAs, one team’s leak ruins the rest of the session.
JS and `fetch()` have lower network priority than “main resources” (`<a>` and `<form>`). This even affects how the OS prioritizes your app over other programs.

Lastly, and most importantly: server code can be measured, scaled, and optimized until you know you can stop worrying about performance. But clients are unboundedly bad, with decade-old chips and miserly RAM in new phones. The Web’s size and diversity makes client-side “fast enough” impossible to judge. And if your usage statistics come from JS-driven analytics that must download, parse, and upload to record a user… can you be certain of low-end usage?
Fresh loads happen more than you think
The core SPA tradeoff: the first load is slower, but it sets up extra code to make future interactions snappier.
But there are situations where we can’t control when fresh loads happen, making that one-time payment more like compounding debt:
- Page freezing/eviction
- Mobile browsers aggressively background, freeze, and discard pages.
- Switching apps/tabs unloads the first annoyingly often — ask any iOS user.
- Browser/OS updates and crashes
- Many devices auto-update while charging.
- I’m sure you can guess whether SPAs or MPAs crash more often.
- Deeplinks and new tabs
- Users open things in new tabs whether we like it or not.
- Outside links must do a fresh boot, weakening email/ad campaigns, search engine visits, and shared links.
- Multi-device usage really doesn’t help.
- In-app browsers share almost nothing with the default browser cache, which turns what users think are return visits into fresh load→parse→execute.
- Intentional page refreshes
- Users frequently refresh upon real/perceived issues, especially during support. The world considers Refresh ↻ the fixit button, and they’re not wrong.
- SPAs often refresh themselves for logins, error-handling, etc.
- A related problem: you know those “This site has updated. Please refresh” messages? They bother the user, invoke a refresh, and replicate a part of native apps nobody likes.
But when fresh pageloads are fast, you can cheat: who cares about reloading when it’s near-instant?
SPAs are more fragile
While “rebooting” on every navigation can seem wasteful, it might be the best survival mechanism we have for the Web:
These errors come, in large part, from users running odd niche or out-of-date browsers with broken Javascript or DOM implementations, users with buggy browser extensions injecting themselves into your scope, adblockers blocking one of your `<script>` tags, broken browser caches or middleboxes, and other weirder and more exotic failure modes.

— Systems that defy detailed understanding § Client-side JavaScript
Given that most bugs are transient3, simply restarting processes back to a state known to be stable when encountering an error can be a surprisingly good strategy.
- We control our servers’ features and known bugs, and can monitor them more thoroughly than browsers.
- Client error monitoring must download, run, and upload to be recorded, which is much flakier and adds its own fun drawbacks.
- Relying on client JS is a hard known-unknown:
  - JavaScript isn’t always available and it’s not the user’s fault.
  - Why availability matters
  - Dan Abramov’s warning against React roots on `<body>`.
  - User-Agent Interventions are when browsers intentionally mess with your JS.
Overall, SPAs’ reliance on client JS makes them fail unpredictably at the seams: the places we don’t control, the contexts we didn’t plan for. Enough edge-cases added up are the sum total of humanity.
When a SPA is a good choice
Remember PROXX from earlier? It did fit in 20kB, but it also doesn’t worry about a lot of things:
- Its content is procedurally generated, with only a few simple interactions.
- It has few views, a single URL, and doesn’t load further data from a server.
- It doesn’t care about security: it’s a game without logins or multiplayer.
- If it weren’t accessible4 or broke from client-side fragility… so what? It’s a game. Nobody’s missing out on useful information or services.
PROXX is perfect as a SPA. You could make a Minesweeper clone with `<form>`s and a server, but it would probably not feel as fun. Games often should maximize fun at the expense of other qualities.
Similarly, the Squoosh SPA makes sense: the overhead of uploading unoptimized images probably outweighs the overhead of expensive client-side processing, plus offline and privacy benefits. But even then, there are many server-side image processors, like ezgif or ImageOptim online, so clearly there’s nuance.
You don’t have to choose extremes! You can quarantine JS-heavy interactivity to individual pages when it makes sense: SPAs can easily embed in an MPA. (The reverse, though… if it’s even possible, it sounds like it’d inherit the weaknesses of both without any of their strengths.)
But if SPAs only bring ☹️, why would they exist?
We’re seeing the pendulum finally swing away from SPAs for everything, and maybe you’ll be in a position someday where you can choose. On the one hand, I’d be delighted to hand you more literature:
On the other hand…
Please beat offline-first ServiceWorker-cached application shells or even static HTML+JS on a local CDN with a cgi page halfway across the globe.
This is true. It’s not enough for me to say “don’t use client-side navigation” — those things are important for any site, whether MPA or SPA:
- Offline-first reliability and speed
- Serving as near end-users as possible
- Not using CGI (it’s 2022! at least use FastCGI)
So, next time: can we get the benefits that SPAs enjoy, without suffering the consequences they extremely don’t enjoy?
- Some say that instead of CSRF tokens, SPAs may only need to fetch data from subdomains to guarantee an `origin` header. Maybe, but the added DNS lookup has its own performance tax, and more often than you’d think. ↩
- No, this was not an incident we had. For starters, the teenagers were Luxembourgian. ↩
- 131 out of 132 bugs are transient bugs (they’re non-deterministic and go away when you look at them, and trying again may solve the problem entirely), according to Jim Gray in Why Do Computers Stop and What Can Be Done About It? ↩
- PROXX actually did put a lot of effort into accessibility, and that’s cool. ↩
Top comments (26)
With service workers you can get the same benefits of an MPA but have everything be instant on your device by streaming the HTML from the service worker itself. The main downfall of this is that the service worker isn't mature yet in that I can't load modules on-the-fly with dynamic imports. I created my own importer to do this but it turned into so much work maintaining it all that I just ended up using a library similar to HTMX (my own HTMF).
Apps for the general public I think MPAs make a lot of sense. Apps for internal business I think something like HTMX (or HTMF :-)) can make really good sense (a mix between MPA and SPA).
It really depends on the constraints of the project to determine if a full SPA is needed or a mixture or just an MPA.
I haven't used SPAs a lot but one thing that I don't like is the repaint between pages when I'm doing the MPA/SPA with the service worker when I don't stream the HTML from the service worker. It is much cleaner to do the latter. But, like I said before, it is difficult to do when you don't have a framework built that way already as most people are focused on SPAs rather than a service worker app with streamed HTML for each page.
I got on-the-fly imports working well in Marko, since its Rollup plugin already has logic for server vs. client bundles. (I lied to `@marko/rollup` in just the right way so that it also made a third, tree-shaken bundle that thought it was for the server, then I transpiled it for the SW environment.)

That repaint between pages is interesting if you still have it — Chrome was the last browser to remove that flicker when it shipped Paint Holding in 2019, I thought. When and where are you seeing it?
Oh, and the on-the-fly imports (dynamic imports) I was talking about: I meant doing them in a service worker, which isn't allowed in service workers yet. I have created my own, but it is a bit of a pain :-) compared to just having it work out of the box.
Yeah, I reused the Rollup ecosystem’s code for that, myself. Pretty odd that `import` still isn’t allowed in Service Workers, but at least `importScripts()` lets us fake it.

Yeah, actually, I'm looking specifically for dynamic imports so I only need to load the code that's being used at the moment, rather than having to load all the pages' code at once. So this:
developer.mozilla.org/en-US/docs/W...
For regular imports they work with the latest Chrome as of earlier this year (if I remember right). But Firefox still doesn't have it.
But this discussion has been great. It's made me think of how I can simplify my offline app more using a truer MPA style coding.
Yes, I'm definitely looking forward to your next post! Thank you for putting all the time and effort into these posts!
Yeah, I guess the repaint issue is because I created my own "spa" framework. Basically, it isn't a SPA but kind of is. Since I have separate HTML pages cached with a service worker and then I have all the code on the front end (since I have the app as an offline-first app). So, each time I go to a different page it has to load the data on the fly after changing the page. So, I have a list of my weight for the year with comments and such, and when I go to that page it takes time to get all that data onto the page since it has to load the JS first before adding the dynamic data. If I did it from the service worker the user wouldn't notice all that. I guess that is why SPAs do their own routing so it doesn't seem so janky. I don't really work with SPAs much since I'm more of a back end person, but I imagine in a SPA they create the page first before moving over to the new page. Also, like you, I don't like the size of JS on the front end and frameworks seem to be pretty beefy and cause issues for the user - like you were mentioning. That is why I've been pushing for doing everything from the service worker. I was thinking of just loading all the code in the service worker and just streaming the HTML from there.
But there are also issues with service workers. Like, when using a service worker ideally you have the SHA unique characters appended to your file name. But that is hard to do without getting involved in the SPA world. So, tooling isn't really great when doing it this way. But maybe I'll pick it up again and try it again. I haven't been doing as much coding on the side as I have in the past as I've been trying to spend more time with the family before they leave the house :-). But I get some free time here and there.
It’s very interesting that you and I seem to be converging on the same ideals, but from opposite directions. You might like the next post in this series for that especially.
Your post inspired me to rewrite my app in an MPA style with minimal JS. Granted it is an offline app so it is all JS but it is written from a service worker. It really simplifies things. Make a post on a form - no problem, just refresh the page! I still get the occasional white flash on the page when going between pages, which is annoying. But the simplicity is amazing.
I'm excited for your next posts!
github.com/jon49/WeightTracker/tre...
Oh damn! I am going to be reading the heck out of this code — you may have pulled off what I was trying to demonstrate for the next post faster than I could!
lol, Thanks. Yeah, it isn't the first time that I've attempted this. That's why I was able to do it so fast. I've been fascinated about running an app from a service worker ever since I heard Chris Love say that service workers are the death of SPAs.
I have actually created a module loader to load JS on-the-fly from a service worker so you can have a pretty large app. But it was too much work and I just wanted to finish the app. So, I ended up making it with HTMF and Razor Pages (C#).
github.com/jon49/MealPlanner/blob/...
Chris Love's website: love2dev.com/
Love the header image.
I'm mostly convinced that client-side routing is almost always bad news. I was already disillusioned with SSG and currently enjoying Remix a lot. So I'm looking forward to the next installment.
Yeah, I think at this point I like everything about Remix’s technical approach except for React itself. Remix’s principles and fundamentals make a lot of sense to me — but React’s tradeoffs clash with them sometimes.
I'm also conflicted about it. Progressive hydration and streaming SSR with Suspense ease some of the pain, but there are still some drawbacks. On the other hand, since changing the DOM is unavoidable in most apps, I like using a single mental model for building and modifying the DOM, and React is a pretty stable choice. I'm still evaluating new choices that I see.
I agree wholeheartedly. With those priorities, you might be as interested in Marko as I was. It keeps the mental model of a tree of components, but is much more efficient with SSR and doesn’t need a client-side router.
Marko has been on my radar for a long time, but I guess unfamiliarity was a significant barrier for me. There's also SolidJS which has similar claims, for that matter. I guess at some point I'll have to bite the bullet and write a small project in either one to gain some first-hand experience.
Totally agree with all the stuff you've written here, but there is one thing that bothers me even more from a user perspective: MPAs that use frontend frameworks for extremely basic stuff. I don't want to wait for the JS to load (on every page load) to see the header etc. That feels like the worst of all worlds without any benefits, and seems not to be as uncommon as it should be.
Another hard thing about MPAs is that I haven't been able to find documentation on how to build robust MPAs. That knowledge is hard to find, with searches pigeonholing old web pages. So, e.g., how do you prevent a double click? If I remember right from what I've read, you need to do a redirect after a form submission and somehow that stops the double click. There are other patterns like that, but where to find them?
Ah, you’ve got two concepts there — which I think reinforces your point that this knowledge isn’t easily accessible.
Redirecting after form submission avoids the “Really resubmit this data?” dialog when hitting Back after a `<form method=post>`, known as the Post/Redirect/Get technique.

Preventing double-clicked form submissions without JavaScript doesn’t have a single name: each framework tends to have its own (WordPress reuses `WP_NONCE`, for example). I like calling them “idempotency keys/tokens” after Stripe’s API popularized the term. (Stripe uses an HTTP header, but as you probably suspect it’d be `<input type=hidden>` for no-JS functionality, such as in the `django-idempotency-key` module.)

Idempotency keys are a good idea even if you have JS briefly disable a submit button after the first click, because they cover network hiccups too. For example, browsers have built-in HTTP retry logic, but if the server’s response was the one that got lost along the way, the browser+network combo could cause duplicate requests.
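If it helps, here’s a hedged sketch of both patterns together using Express; the in-memory `Set`, routes, and field names are purely illustrative:

```js
// Sketch only: Post/Redirect/Get plus an idempotency key in a hidden input,
// so a double-click (or a retried request) can't write the same entry twice.
import express from 'express';
import crypto from 'node:crypto';

const app = express();
app.use(express.urlencoded({ extended: false }));
const usedKeys = new Set(); // a real app would persist and expire these

app.get('/weight/new', (req, res) => {
  const key = crypto.randomUUID();
  res.send(`
    <form method="post" action="/weight">
      <input type="hidden" name="idempotency_key" value="${key}">
      <label>Pounds <input name="pounds" inputmode="decimal"></label>
      <button>Save</button>
    </form>`);
});

app.post('/weight', (req, res) => {
  const key = req.body.idempotency_key;
  if (!usedKeys.has(key)) {
    usedKeys.add(key);
    // …persist the weight entry here…
  }
  res.redirect(303, '/weight'); // Post/Redirect/Get: Back won't re-prompt
});

app.listen(3000);
```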
As for where to find these techniques… I also wish I knew! I had to find them the hard way, almost exactly like you described. Heck, I only learned about idempotency keys last month.
I’m loving these articles, even if some of it is a little over my head on the technical side. I’m blown away at how deeply you know the topics, and the writing is not just clear and communicative, it has style and voice which is rare in technical writing.
Out of curiosity, the references are really thorough. Are you constantly reading and bookmarking things to use day-to-day, or do you search out what you need and research as you write?
You guessed it: I have an ongoing bookmarks folder for this. Sometimes I do search for things when I need a stronger case or a more up-to-date link, though.
Hi Taylor,
This is an excellent post if for no other reason than its thoroughness: it's a rare gem to see so much thought put into a subject with great cross-references. I happen to think you're on the money in many cases, too!
Liked and followed. Excellent deep-dive. I guess this debate started a bit more than a year ago, with the infamous Twitter exchange, then the Jake/Surma video youtube.com/watch?v=ivLhf3hq7eM and the overall abundance of choice in terms of SSR/SSG frameworks (SvelteKit, Elder, Astro).
A general React fatigue - which I'm personally happy to see as it was monopolizing the web/frontend sphere for a while now.
MPA embedded in SPA: It sounds like Hotwired.dev Turbo, formerly known as Turbolinks. It stays on the same page (akin to an "SPA"), but on navigation it sends an AJAX request, receives HTML, and replaces the title and entire body of the existing page, and updates the URL with pushState. The good old PJAX approach.
Check out this video for an unholy union of Astro and Turbo to see exactly what you describe:
youtube.com/watch?v=6mv0_jsWhoE
"I asked a maintainer of a popular React router for his take:" which one?
He requested not to be named, so I’m respecting that.