Eric L. Goldstein

Originally published at babbel.com

Exploring Web Rendering: Server Components Architecture

Photo by Alabama Extension on Flickr

Holes. You’re about to read about holes. Like in the ground. What? Server components architecture can be a confusing topic, so we’re going to build on all the knowledge gained in this article series so far until this fundamental shift in application architecture makes sense. We’ve made it to the final entry in this series, so thank you for following along to the end!

Let’s recap what we’ve covered so far. The case for server-first rendering has been building since the first article, and each post has hopefully made an increasingly convincing argument for why it is essential for good performance. Starting with part 1 of the series: isomorphic JavaScript is code that runs unchanged on both server and client. However, every component on the page must have its code sent to the browser for hydration, so there is a non-trivial startup cost when initializing the application code to make the server-rendered HTML interactive; it shows up as stutters and missing interactivity until the JavaScript executes to completion, a phenomenon known as the “uncanny valley”. As outlined in article 2, one way to reduce the startup penalty is partial hydration, a.k.a. “islands”, wherein the amount of code sent to the browser is reduced by only sending interactive component code. Another performance-enhancing option is to delay when hydration occurs; this is known as progressive hydration and was covered in part 3. Near-elimination of JavaScript can be accomplished with streaming HTML – covered in part 4 of this series – by server rendering all static components as-is and sending placeholder content for those still performing an asynchronous action like data fetching. As each async response completes, a new HTML “chunk” is sent to the browser to update the page contents with almost no JavaScript; this repeats until the page is fully built.

Combining the techniques from those 4 articles and adding some new ideas leads to a distinct application architecture known as server components; how this combination works will be covered after the upcoming explanation of server components mechanics. To be clear, “server components” and “server components architecture” are not the same thing. The former refers to a type of component used in a component tree, specifically the server-only variant. The latter refers to the overall structure and behavior of applications built with new frameworks such as React Server Components. Despite the frontend ecosystem’s overwhelming focus on React Server Components, server components architecture is not just part of a new version of React. In fact, it is available in frameworks like Nuxt for Vue.js and SolidStart for SolidJS; SolidStart’s documentation is coming soon, but its Solid Movies demo app (which uses only ~16kB of JavaScript) has its code available and uses the experimental islands router as a stand-in for upcoming server components support. Rather than a React feature, server components should be thought of as an architecture that further shifts the developer mindset from the client-first SPA approach that took hold roughly a decade ago to a server-first one with unique abilities. For simplicity and relevance, React Server Components with Next.js 14 will be used for examples throughout the rest of this article.

The server components architecture consists of two component types: (1) its namesake server components and (2) client components. Before this new architecture, single-page applications relied on components that were rendered in the browser by default. Using server components reverses this trend because components are now rendered on the server by default. Additionally, server components are truly server-only meaning that they never render on the client. As a result, numerous benefits are gained related to data fetching, performance, security, and user experience. Data fetching and performance are improved because the rendering server is likely closer to its data source than most end user devices, so making multiple round trips to a database, for example, will occur more quickly between data centers than between a data center and a user’s browser. Also, the code needed to fetch that data remains on the server including any API keys and secret information, so the data architecture remains simpler and bundle sizes smaller. User experience is impacted greatly by performance, so improvements here clearly cause knock-on effects for UX due to less server response delay and less code in the browser as explained above.
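
To make those benefits concrete, here is a minimal sketch of a server component that fetches data with a secret token; the component name, endpoint, and environment variable are assumptions for illustration, not code from this series. Because the component only ever runs on the server, neither the token nor the data-fetching code appears in the browser bundle.

// Hypothetical server component: because it never ships to the browser, the
// CMS token and the data-fetching code stay out of the client bundle.
import React from 'react';

// Read on the server only; this environment variable is assumed for illustration.
const CMS_TOKEN = process.env.CMS_API_TOKEN;

export default async function LatestPosts() {
  // Illustrative endpoint; in Next.js 14 this fetch happens close to the data source.
  const response = await fetch('https://cms.example.com/posts?limit=5', {
    headers: { Authorization: `Bearer ${CMS_TOKEN}` },
  });
  const posts = await response.json();

  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}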

Client components, on the other hand, are a bit of a misnomer: they are isomorphic and execute on both the server and the client, with the additional benefit that server components can send props directly to specific client components. This ability to do hybrid rendering with server-client prop exchange at arbitrary points in time is a unique advantage server components architecture has over islands architecture. Additionally, client components are effectively persistent islands: when navigating between pages, the “static” parts of the page (outside the islands) are sent as an API response from the server and then diffed against the current client virtual DOM to produce the final real DOM; see the figure below for a visualization. Note that React can now update page content by diffing not only locally-rendered DOM nodes but also nodes rendered by the server and sent in a special, JSON-like VDOM format.

Diagram showing how client components can persist across server-generated page changes
Client components persist across page navigations despite the new page’s HTML coming from the server in the form of VDOM “JSON”
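
As a rough sketch of that server-to-client prop exchange (the component names and the fetchArticle() helper below are hypothetical), a server component can fetch data and hand any serializable values to a client component as props:

// ArticleHeader.jsx: a hypothetical server component (no 'use client' directive).
import React from 'react';

import LikeButton from './LikeButton'; // a client component starting with 'use client'

// Stand-in for a real data source.
async function fetchArticle(slug) {
  return { id: slug, title: `Article: ${slug}`, likes: 0 };
}

export default async function ArticleHeader({ slug }) {
  const article = await fetchArticle(slug);

  // Only serializable values (strings, numbers, plain objects, arrays) can cross
  // the server/client boundary as props; functions and class instances cannot.
  return (
    <header>
      <h1>{article.title}</h1>
      <LikeButton articleId={article.id} initialCount={article.likes} />
    </header>
  );
}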

Now that the definition is better understood, why the focus on “holes” at the start of the article? In the component tree, the two types of components are used in mutually exclusive layers, so there is effectively a boundary between client and server components. A component with a “use client” directive as its first line of code is a client component, but it only forms a boundary if it causes the component type to change; in other words, when it is the first client component child of a server component, a “hole” is created in the component graph to signify that the server must wait for the browser to fill in that part of the component tree. A client component that is a child of another client component does not create a boundary because one was already created by one of its parents. Conversely, a server component that is a child of a client component means a server component boundary was crossed, so a “hole” is created in the component graph here as well.

Diagram showing an example of component boundaries
Server components architecture requires boundaries to be crossed when switching component types. Behold the donut “holes”.
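
To make the boundary rules concrete, here is a minimal sketch using hypothetical file and component names. The server component’s first client component child creates a boundary (a “hole”), while a client component rendering another client component does not:

// Layout.jsx: no directive, so this is a server component.
import Sidebar from './Sidebar';

export default function Layout({ children }) {
  return (
    <div>
      {/* First client component under a server component: a boundary ("hole") is created here. */}
      <Sidebar />
      <main>{children}</main>
    </div>
  );
}

// Sidebar.jsx: the directive marks the switch to client territory.
'use client';

import ThemeToggle from './ThemeToggle';

export default function Sidebar() {
  return (
    <nav>
      {/* Client component inside a client component: no new boundary. */}
      <ThemeToggle />
    </nav>
  );
}

// ThemeToggle.jsx: imported by a client component, so it is bundled as client
// code without needing to repeat the directive.
export default function ThemeToggle() {
  return <button>Toggle theme</button>;
}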

An important restriction is that server components cannot be imported into client components. However, server components can be children of client components. Thus, to get around this restriction, pass server components to client components as props; most commonly, a server component is passed as the children prop.

Let’s build a simple example to show more clearly how this architecture works, including nesting a server component inside a client component. To make this easier to follow, an open source repository was created to house this demo application for easier tinkering and learning. Imagine a basic blog setup that allows sharing the first paragraph of content on social media. This could all be done in a server component except for the feedback needed from the user about if and when to share. Thus, to use server components as efficiently as possible, we will use the architecture diagrammed below: a root <BlogPost> server component wraps a <Share> client component, which has server-rendered content passed directly to its children prop. Notice the similarities to the diagram above, especially when considering the boundaries being crossed and how the client component’s contents are mostly rendered by a server component.

Diagram showing the component boundaries of the demo application and how even the client component relies on server-rendered content
Notice the 2 boundaries crossed (“holes”) and how the <Share> component is mostly rendered by the server

Examining the source code of both components can further illuminate the behavior and mechanics of React Server Components with Next.js.

First the <BlogPost> server component:

// External Imports
import React from 'react';

// Internal Imports
import { getParagraphs } from '../../utils/content';
import Share from '../Share/Share';

// Component Definition
export default async function BlogPost() {
  const paragraphs = await getParagraphs();

  return (
    <article>
      {paragraphs.map((paragraphText, index) => {
        if (index === 0) {
          return <Share key={paragraphText}>{paragraphText}</Share>;
        }
        return <p key={paragraphText}>{paragraphText}</p>;
      })}
    </article>
  );
}

Because BlogPost does not have a “use client” directive on its first line, it is a server component, so all rendering and data fetching occurs on the server. Additionally, React Server Components encourages the use of asynchronous components to promote colocation of rendering and data fetching, so the BlogPost component uses async/await to load paragraph data before rendering. Such a design typically introduces waterfalls into the network graph because the server has to pause during each network fetch. However, Next.js argues that waterfalls are mostly prevented because the global fetch() is replaced with a Next-specific version by default so that requests can be memoized. The getParagraphs() function is meant to mimic a call to a CMS or something similar; once its data is returned, rendering can begin. The first paragraph is rendered into the <Share> component, and all other paragraphs are rendered into basic <p> elements.
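
For context, getParagraphs() can be imagined as something like the sketch below (an illustrative stand-in, not the demo repository’s exact code), which fakes a CMS response after a short delay:

// utils/content.js: illustrative stand-in that simulates a CMS request so the
// async BlogPost server component has something to await.
const FAKE_LATENCY_MS = 300;

export async function getParagraphs() {
  await new Promise((resolve) => setTimeout(resolve, FAKE_LATENCY_MS));
  return [
    'First paragraph, which will be wrapped in the <Share> client component.',
    'Second paragraph, rendered as a plain <p> element on the server.',
    'Third paragraph, also server-rendered and sending no JavaScript to the browser.',
  ];
}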

Then the <Share> client component:

'use client';

// External Imports
import React, { useState } from 'react';

// Local Functions
function shareContent(content) {
  alert(`Sharing content: ${content}`);
}

// Component Definition
export default function Share({ children }) {
  const [isPending, setIsPending] = useState(false);

  function onClick() {
    setIsPending(true);
    setTimeout(() => shareContent(children), 0);
    setTimeout(() => setIsPending(false), 100);
  }

  return (
    <p style={{ backgroundColor: isPending ? 'yellow' : 'green' }} onClick={onClick}>{children}</p>
  );
}

Starting with a “use client” directive signifies that <Share> is a client component. To illustrate the lessons of this article as clearly as possible, this component is a simple approximation of what sharing content on social media could eventually look like, without the implementation complexity. Specifically, a paragraph is shown with a green background when no sharing is occurring and a yellow background while sharing is in progress. Note that the contents of the paragraph come from the component’s children prop, which is supplied by the <BlogPost> server component; in other words, <Share> is a client component whose contents are mostly rendered by the server. Within the onClick() function, the three lines of code change the background color to yellow, use setTimeout() to wait for the next tick of the event loop before executing shareContent(), and finally wait 100ms before setting the background color back to green. The two setTimeout() calls ensure the background color changes are not blocked by showing the alert() box.

Revisiting an earlier-mentioned topic, how can the subjects of this series’ 4 previous articles be combined into the server components architecture? The architecture does introduce innovations of its own, such as the server component type, but it relies on previous ideas as the basis for its functionality. The first article in the series covered isomorphic rendering and hydration, which is a requirement for client components to function. However, because server components execute entirely on the server with no client hydration, they have more in common with old-school PHP server-side applications than with modern frameworks. Client components are more than just isomorphic components; otherwise, application state would be lost during page navigation because new server-rendered HTML would overwrite client component HTML. Instead, each client component root can be thought of as a persistent island wherein island state is preserved across page loads; part 2 of this article series covers partial hydration, which is a prerequisite for such functionality. Progressive hydration, covered in part 3, is an optional performance tweak during page load to delay hydration until a component scrolls into view or the browser is idle, for example; hydrating this way is optional with the server components architecture. Streaming HTML, mentioned in part 4 of this blog series, is also optional but is made more powerful with components rendered only on the server. Older application architectures had two options for server-rendered components: (1) a server-only application with no hydration or interactivity leading to a poor user experience, or (2) an isomorphic application with controlled interactivity and better user experience but with hydration potentially reducing initial interaction performance. Streaming HTML with the server components architecture allows further refinement by choosing whether hydration is combined with streaming at the component rather than the application level by streaming with <Suspense> within server or client components; thus, more fine-grained control over behavior, performance, and bundle size is possible. Overall, while not all strictly required, the techniques from the 4 previous articles – isomorphic rendering, hydration, partial & progressive hydration, and streaming HTML – are foundational to understanding server components architecture.
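
As a minimal sketch of that component-level streaming choice (the component names and the fetchComments() helper are hypothetical), a server component can wrap a slow, server-only child in <Suspense> so its HTML streams in when ready while the rest of the page is sent immediately:

import React, { Suspense } from 'react';

// Stand-in for a slow data source such as a database call.
async function fetchComments() {
  await new Promise((resolve) => setTimeout(resolve, 500));
  return [{ id: 1, text: 'Great series!' }];
}

// A server-only component whose HTML is streamed to the browser once its data resolves.
async function Comments() {
  const comments = await fetchComments();
  return (
    <ul>
      {comments.map((comment) => (
        <li key={comment.id}>{comment.text}</li>
      ))}
    </ul>
  );
}

export default function BlogPostPage() {
  return (
    <article>
      <h1>Server Components and Streaming</h1>
      {/* The surrounding shell is sent right away; the fallback is replaced by the
          streamed-in Comments HTML once its async work finishes. */}
      <Suspense fallback={<p>Loading comments…</p>}>
        <Comments />
      </Suspense>
    </article>
  );
}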

Somehow this 5-part, performance-focused learning adventure has reached its end, and I could not be more thankful that you hung in there the whole way! I certainly hope you learned something new. If you have any questions or feedback about this topic or any others, please share your thoughts with me on Twitter. I would certainly enjoy hearing from you! And be sure to check out Babbel’s engineering team on Twitter to learn more about what’s going on across the department. Keep exploring!
