
Ryan Carniato

Server Rendering in JavaScript: Why SSR?

Server-Side Rendering is all the talk in the JavaScript framework world right now. There are obvious examples like Vercel's Next.js, which made the news by raising $40M in new funding. Next, Nuxt, Gatsby, and Sapper have all been very popular over the last few years, alongside the rise of JAMstack, which promotes Static Site Generation.

But the thing you probably should be paying attention to is that the frameworks themselves have been investing heavily in this area for the past two years. There is a reason why we've been waiting for Suspense in React, why we see blog posts about the Islands Architecture, and why Svelte and Vue have been pulling meta-framework-type projects under their cores' umbrellas. This is the thing everyone is chasing.

So I want to take some time today to fill in the gaps, talk about the underlying technology, and overall paint a better picture of what is going on.

Why Server Rendering?

Why server render at all? For some of you, this might be obvious. But it wasn't for me.

I mean, there are plenty of ways to mitigate the initial performance costs of JavaScript. I had even made it my personal mission to show people that a well-tuned, client-only Single Page App (SPA) could outperform a typical server-rendered SPA in pretty much every metric (even First Paint). And crawlers can now crawl dynamic JavaScript pages for SEO. So what's the point?

Well, even with crawlers now fully capable of crawling these JavaScript-heavy sites, they do get bumped to a second tier that takes longer to index. That might not be a deal-breaker for everyone, but it is a consideration. And meta tags rendered on the page are often used for social sharing links. Those scrapers are usually less sophisticated, so you only get the tags initially present, which would be the same on every page, losing the ability to provide page-specific content.

But these concerns are not new. So let's take a look at what I believe are the bigger motivators for the current conversation.

Don't Go Chasing Waterfalls

JavaScript bundle sizes have grown, and grown, and, well, grown some more. Not every network connection is made equal. Under slow networks, SSR will be faster to show something to the user on initial load. So if you need the absolute fastest page load, there is no contest.

It all boils down to the fact that nothing happens in the browser until it receives the HTML page back. It is only after starting to receive the HTML that other assets are requested.

For dynamic client JavaScript pages like a SPA, or even the dynamic parts of a statically generated site as you might create with Gatsby or Next, this often means at least three cascading round trips before the page is settled.


The thing to note is this isn't only a network bottleneck. Everything here is on the critical path from parsing the various assets, to executing the JavaScript to make the async data request. None of this gets to be parallelized.

Here is the rub. This is further compounded by the desire to keep the bundle size small. Code splitting is incredibly powerful and easy to do on route boundaries, but a naive implementation ends up like this:


Four consecutive round trips! The main bundle doesn't know which page chunk to request until it executes, and that chunk must load and execute before it knows what async data to request.
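As a rough illustration of why this hurts, here is a hypothetical simulation of that naive sequence (the function names and timings are illustrative stand-ins, not any real framework's API). Each step must finish before the next request can even be issued:

```javascript
// Simulated 100ms network round trips (illustrative, not a real API).
const delay = (ms, value) => new Promise((res) => setTimeout(() => res(value), ms));

const fetchHtml = () => delay(100, "<html>…</html>");
const fetchMainBundle = () => delay(100, { route: "/dashboard" });
const fetchRouteChunk = (route) => delay(100, { dataUrl: `/api${route}` });
const fetchAsyncData = (url) => delay(100, { url, items: [1, 2, 3] });

async function naiveLoad() {
  const start = Date.now();
  await fetchHtml();                                 // 1. HTML document
  const bundle = await fetchMainBundle();            // 2. main bundle, learns the route
  const chunk = await fetchRouteChunk(bundle.route); // 3. route chunk
  const data = await fetchAsyncData(chunk.dataUrl);  // 4. async data
  // Four serial trips: total is roughly the sum of all four latencies.
  return { data, elapsed: Date.now() - start };
}
```

Nothing here is slow on its own; it is the enforced ordering that adds up.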

How does Server Rendering address this?

Knowing the route you are on lets the server render the assets you will need right into the page, even if they are code-split. You can add <link rel="modulepreload" /> tags or headers that start loading your modules before the initial bundle even parses and executes.
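A minimal sketch of that idea, assuming a hypothetical build manifest that maps routes to their chunks (real build tools generate something similar, but the shape here is made up):

```javascript
// Hypothetical route-to-chunks manifest; a real bundler would generate this.
const manifest = {
  "/dashboard": ["/js/main.js", "/js/dashboard.chunk.js"],
};

// Emit <link rel="modulepreload"> tags for the requested route so the
// browser starts fetching route chunks in parallel with the main bundle.
function preloadTags(route) {
  return (manifest[route] || [])
    .map((href) => `<link rel="modulepreload" href="${href}" />`)
    .join("\n");
}
```

The server would inject the result of `preloadTags(req.url)` into the document head before streaming the response.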

Additionally, the server can start the async data loading immediately upon receiving the request and serialize the data back into the page. So while we can't completely remove the browser waterfalls, we can reduce them to one. However, a naive approach here actually delays the initial response of the HTML page, so it isn't a clean victory.
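A sketch of what that serialization might look like. The `window.__INITIAL_DATA__` name is a common convention rather than a specific framework's API, and the escaping of `<` is the standard guard against the payload closing the inline script tag early:

```javascript
// Server side: embed the route's data in the HTML so the client can
// hydrate without re-requesting it on load.
function renderPage(appHtml, data) {
  // Escape "<" so user data can't inject a premature </script>.
  const serialized = JSON.stringify(data).replace(/</g, "\\u003c");
  return `<!DOCTYPE html>
<html>
  <body>
    <div id="app">${appHtml}</div>
    <script>window.__INITIAL_DATA__ = ${serialized};</script>
    <script type="module" src="/js/main.js"></script>
  </body>
</html>`;
}
```

On the client, the bundle reads `window.__INITIAL_DATA__` during hydration instead of issuing a fresh data request.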


In fact there is a lot more we can do here that I will cover in a follow-up article.

After Initial Load

This equation completely changes after the first load. Assets can be preloaded/cached with a service worker. JavaScript is even stored as bytecode so there is no parsing cost. Everything except the async data request is static and can already be present in the browser. There are no waterfalls, which is even better than the best case from server rendering.


But invalidating out-of-date service workers and cached assets can be a whole other sort of issue. Stale-while-revalidate can go a long way for certain types of applications. Sites that need to be up to date might not opt for this, and instead use caches they have more control over.
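To make the stale-while-revalidate tradeoff concrete, here is a minimal sketch of the control flow using an in-memory cache as a stand-in (a real service worker would use the Cache API, but the logic is the same: answer from cache immediately, refresh in the background):

```javascript
// Stale-while-revalidate over an in-memory Map (illustrative stand-in).
function makeSWR(fetcher, cache = new Map()) {
  return async function get(key) {
    // Always kick off a background refresh.
    const revalidate = fetcher(key).then((fresh) => {
      cache.set(key, fresh);
      return fresh;
    });
    if (cache.has(key)) {
      revalidate.catch(() => {}); // background refresh; ignore failures
      return cache.get(key);      // serve the stale value immediately
    }
    return revalidate;            // cache miss: wait for the network
  };
}
```

The user gets an instant (possibly stale) response, and the next request sees the refreshed value, which is exactly why this strategy suits some sites and not others.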

So the takeaway on this whole topic of performance and size is that the client alone has many techniques to mitigate most things other than that first load of fresh content, which will always be constrained by the speed of the network. But as our applications scale, without due consideration it is easy for SPA performance to degrade, and a naive application of best practices only introduces other potential performance bottlenecks.

Server rendering can relieve a couple of the important ones if the initial load is important to our sites and applications.

Modern Tools for Everyone

We need to step back out a bit to put this in perspective. There are a lot more websites than web applications. This has always been the case but the mindshare around modern JavaScript frameworks has changed.

When client JavaScript frameworks were first being developed there was a simple goal in mind. Find a way to do all the things in the browser that needlessly had us going back to the server. We were building ever more complex user interfaces and full-page reloads were just not acceptable in a world where people were getting used to native app experiences.

These tools may have been developed with interactive web applications in mind, but there is a much larger set of potential users to tap into who appear to be actively looking to these frameworks for their simpler sites.

This is a really compelling problem, especially when you consider that coordinating client and server efficiently by hand can be really complicated. Whenever something is used outside its original parameters, it takes special consideration.

JS Frameworks vs Server Frameworks

This struggle isn't limited to JavaScript frameworks. Adding largely dynamic JavaScript to something rendered in Rails or any classic backend has this complexity. It's just that JavaScript frameworks see this as a unique opportunity to create a completely isomorphic experience: one where, with a single codebase, you can create a whole site. Sort of like the old days, but also not at all like them.

The fundamental thing client-side libraries have been solving is state management. It's the whole reason MVC architectures have not been the right match for the client: something needs to maintain the state. MVC, with its singleton controllers, is wonderful for stateless things like RESTful APIs, but it needs special mechanisms to handle the persistence of non-model data. Stateful clients and stateless servers mean reloading the page is not acceptable.

The challenge for server frameworks is that even with mechanisms like Hotwire for partial updates, the client part of the equation is no less complicated. You can ignore that it is a thing, and if your needs are meager that can suffice. Otherwise, you end up doing a lot of the same work anyway, which leads to essentially maintaining two applications.

This is why the JavaScript frameworks are uniquely positioned to provide this single universal experience. And why it is so attractive to framework authors.

What's Next?

Well, be prepared to hear about this a lot more. This has been going on for about two years now, but these projects are finally emerging to a point where people feel comfortable talking about them. It has taken time because it's a fundamental shift. While there are the Nexts and Nuxts of the world, the core libraries haven't been optimized for these cases.

With the real exception of eBay's Marko, we haven't yet seen the sort of sophistication you'd expect from these sorts of solutions. But that is all changing. React Server Components are one example. And you had better believe Vue, Preact, Svelte, etc., have all been working on their own solutions in this space.

Server rendering in JavaScript is the next big race for these frameworks. But it's still up to you whether you choose to use it.

Top comments (17)

brucou • Edited

Oh I see. I indeed misunderstood you. You are talking about a common intermediate representation from which you can derive multiple formats fitting given SSR engines. The difficulty would be to express both logic (if, loops, event handling, etc.) and parameterization (props of the component to use a React terminology) in that intermediate representation (otherwise the component would be just HTML which is understood already by all engines) in a way that allows for recombination into the target languages. Is that correct?

I thought about Inertia because it skips the problem by not making any attempts to understand a component meaning. It just passes the component information back to the client to be parsed there by whatever framework the client uses. So instead of running comp(a, b, c) or transpiling comp into a template, it passes comp, a, b, and c forward. It is true that you can't really call that rendering.

Ryan Carniato • Edited

Yeah, that's frustrating, as the language barrier right off the start is a substantial one. The challenge is that components aren't language-agnostic. Not really. There are still imperative escape hatches. You are right that generally language isn't even a consideration. Coming up with the best ways to manage state, apps, etc. is still in active development. In some ways things are still less mature architecturally, but that's because of the problems being chosen to take on.

We were talking about this a bit with Marko since with the next version we've minimized a lot of the need for extraneous JavaScript and built more declarative pieces into the markup, and we are essentially a compiler. So why not target a different language?

The thing is, modern templating isn't just slotting in strings; it's logic. A true isomorphic experience needs the flexibility and expressiveness of a language. JSX, for example, is very different from ERB. There is a complete difference in whether the templating language understands the underlying semantics of what it is working on. For something like Marko, we don't really care what those expressions are and just pass them to Babel, so it's conceivable that a different language processor could make that jump. But you can start to see how that becomes an effort into things we don't already have today. With JavaScript we already have this tooling, and we have a single target. It's a simple solution.

There will be solutions like this eventually, but a couple of other pieces need to exist first. So in the meanwhile, it probably is Node or SOL. But it's not from any desire to alienate; it's just where the solutions are today, and part of what comes with choosing Node in your startup. There are obviously other tradeoffs. I worked at a startup from 2013-2020 that had just moved from Ruby to Node, and back then we had to build almost everything ourselves. We built some garbage, and it was a significant time investment. So I can empathize.

neoan • Edited

I am currently working on a project going outside of this paradigm. After seeing the enormous gains the JIT compiler introduced to PHP8, I started to revive an older dream:
Compiling Vue 3.x components server-side in a way where imports are rendered directly to the DOM depending on the entry point, with a SPA taking over once the client has initialized. The effect is amazingly fast, as the back-end SSR result is exactly like the SPA result, including iterations and conditionals, offering not only SEO advantages but no "flash" when the SPA takes over. This gives the user the illusion of immediate availability (which actually isn't even wrong, as e.g. links would resolve as a-tags before the client is able to attach the SPA router). Further, a custom store solution provides markup that enables a uniform API when writing components, regardless of whether the data is already available (because the route prerendered it) or whether a GET request is necessary. This in turn limits requests while providing a "don't need to think about it" experience while developing.
It's a little premature to share yet, but let me know if you are interested when the time comes.

brucou

You may be interested in how Facebook does this (optimization involving server-side rendering). That is the short presentation; inside it are links to Facebook videos that go in depth, with graphs similar to (actually more detailed than) yours.

Ryan Carniato

My graphic skills leave a lot to be desired. I find I write an article and then delay releasing it considerably while trying to create/collect assets, which becomes more of a pain the more technical the topic.

This is good content though. I see they call the technique of parallelized fetching "entry points," which I hadn't heard named before. I was going to cover a lot of this in my next article.

brucou • Edited

That was not a criticism of your graphic skills :-) Your graphs are fine. It is just that the folks at Facebook have this yearly conference that they prepare conscientiously and an engineering blog that is professionally produced, so of course it will have better production quality.

I found the explanations from the videos surprisingly clear as this is a rather technical subject with some knowledge prerequisites.

Here is a link to the video explaining the GraphQL-related optimizations: (conveniently positioned on a graph like yours :-)

John Peters • Edited

Love your in-depth articles Ryan.

Yes, the isomorphic buzz has become more predominant. In addition, the collapse of server-side MVC architecture was hastened by client-side frameworks like React and Angular. Now we are seeing a resurgence in server-side rendering, which the .NET stack has been doing for over 20 years. This makes me think of the commercial featuring the song "Round and Round." Good architecture never dies; it just continually improves.

Typescript and .NET backend
In considering isomorphic architecture, TypeScript and C# are so similar that there is almost no difference other than learning the libraries. However, the WebAPI portion of .NET is lean and mean, easy to learn, with plenty of good documentation and a huge Stack Overflow community.

As for server-side MVC, yes, it is a stateless protocol, but that doesn't mean state is not part of the equation. A concept known as strong-type binding allows any state to be easily 'synced' between server and client on each request cycle, without any baggage other than the inbound JSON data submission. Client-side state, still a major client-side concern, has enough horsepower these days to keep track of everything it needs.

An interesting thing happened with ASP.NET Core 3.2: they changed the default setup to favor WebAPI over MVC. MVC is still supported, but there never was that much difference (other than the controller) between the two. I believe this is a hint that cloud integration is more important these days and SOA is still king.

I haven't had enough courage to take the JavaScript SSR plunge, even though true isomorphism is really great. Then again, I have to laugh a bit, because SSR was around in .NET for 20 years, and now the JavaScript community is discovering its niceties.

What's next? WASM?

Ryan Carniato

ASP.NET was where I spent my first 6 years as a professional web developer (2005-2011). It felt natural at first because I had been big into C# and the Microsoft stack in general since the late 90s. However, it had a very negative effect on how I view this stuff, so I'm glad to hear things have improved. I spent years trying to keep it in the rearview, so to speak. That's part of why I worked hard to prove that client-side alone could outperform SSR. I wrote arguably the fastest client-side JS framework and worked on figuring out the best patterns here.

Part of the problem was the culture around it at the time. We were writing C# code basically to avoid writing JavaScript, not because it was the best thing to be doing. I spent the last 3 years of that job doing mostly JavaScript progressive enhancement on server-rendered pages, partially because other devs didn't want to touch it, like it was dirty. I used it as an opportunity to get up to date with the JavaScript ecosystem, which helped immensely in the following decade.

At least at the time, the server state solutions involved a lot of data serialization going back and forth and brought immense waits. I understand Hotwire isn't .NET UpdatePanels, etc., but even those approaches to partial updates were very involved. I have to imagine things are much better now, a dozen or so years later.

Although I do understand the dilemma. There is another path besides JS isomorphism, which is WASM, and it could achieve similar things. It is awesome that people are working on it. There is just monumental work needed to make it truly comparable. Both sides of the equation have tradeoffs, and once you cross over you need to be concerned with a whole new set of considerations. This makes truly hybrid solutions difficult, and it's where some of my initial skepticism around React Server Components comes from. The obstacles on the JavaScript side seem mostly conceptual/architectural, whereas the obstacles on the WASM side are initially implementation: missing capabilities, concerns around size, performance of the implementation. The former has the whole JavaScript community able to look at and work on solutions, whereas the latter puts a lot of onus on the browser vendors and each language vendor. It will get there, but if the experience with things like Web Components is any indicator, it will probably take some time.

In terms of taking the plunge in JavaScript, I'd probably still wait. The prospect still terrifies me a bit, the more I understand how the "current" solutions work. We are changing that rapidly right now, but with very few exceptions most JavaScript frameworks are only adequate here. I think if someone with deep .NET or Rails knowledge came into JavaScript SSR (not classic templated MVC with Express, etc.), they'd be like, what is going on? It's not that the problem isn't complex from either side; it's just that we are moving out of the mechanical stage into presenting smooth solutions. Things aren't as well packaged up outside of meta-frameworks, which until recently have been second-class citizens. But in the next 12 months I expect this conversation to go completely differently.

aminmansuri

After 20 years of frameworks.. I'm suffering from framework burnout.

I can't help but conclude that while there have been a few benefits to some of the newer frameworks, for the most part we just keep reinventing the wheel over and over again.

The industry should really settle. I don't think this constant churn is good anymore.

Ryan Carniato

I feel like we have a moving target, so it shouldn't be unexpected that these things evolve. The problem, I think, is that on one side you have simple websites and on the other web applications trying to mimic native experiences. The easiest solution would be to separate them, and the projects on the website side could continue to use whatever they have been for a decade-plus. And honestly, when the web application side was being developed, that was the thinking.

But it's also strange to people why these should be different things. You can accept that there is a divide, but there seems to be a growing desire for a universal solution. That has us entering another phase of this thing. It isn't the first time we've tried isomorphic, but it's the first time we've attacked it with the client as the first-class citizen rather than the server. To me, that is a big distinction. If this doesn't pan out, then I guess what we were doing before is fine. But if it does...

This, to me, has all been about state management. It's the thing that brings complexity. Sure, business logic can be complex and messy, but you can abstract it as a series of controlled transactions. User/UI state just gets pushed around, and no one really wanted to be holding the bag when the music stopped. Client frameworks embraced it and took on being stateful so everything else could stay stateless. In this world, unlike when we tried this in the mid-2000s, we understand how the rest of the pieces should work; we just need to bridge the gap.

hidden_dude • Edited

We've done client first, then server, then client again, then server again. This is an ongoing zigzag, and it's not even unique to the web.

I think it's really about people vying to get control of developers and developers following the latest fads.

Web frameworks are like living in Groundhog Day.. we've seen this over and over again. And the new frameworks don't learn from the lessons of the old.

rahxuls

Loved this article. Very deep.

brucou • Edited

There is Inertia, which tries to help alleviate the problem you describe. Cf.

The idea is to freely switch front-end and back-end by switching adapters. By default you can have Svelte/Vue/React front-ends together with Laravel/Ruby back-ends. Other community-based adapters also exist for other platforms.