
Lucas Correia


Client-first RSC: A scalable solution for data aggregation


First of all, we need to be on the same page about the fact that RSC and SSR are not the same thing.

RSC is a strategy to run part of your React application on the server side. Server components are isolated in the cloud and are never sent to the client. The same is true for server actions.

SSR is a strategy for rendering your client components on the server. It generates HTML, but those components are still sent to the browser, where they run as regular client components.

Do you need SSR?

SSR is really good for performance-critical applications: pages that need to load on really poor connections, on really weak devices. If that's your scenario, you can drop this article and assume SSR is for you. Otherwise, keep reading.

SSR scalability

When trying to pitch Next.js, one of the things that puts me in a corner is dealing with scalability. You're rendering your app on the server, so you need computing power for that. You need to think about scalability, something you wouldn't need to worry about if you were serving static assets instead.

A co-worker gave me a scenario of thousands of parallel requests hitting our FE and asked how to solve it. At the time, the only thing I could think of was horizontal scaling, but I believe that wasn't the answer he was expecting.

Then why RSC?

Now that I've explained why I don't want to use SSR, let me explain how I'm planning to use RSC without facing similar issues: by avoiding server components.

To be honest, the only part of the whole RSC hype that really interests me is server actions. If you have a server component, some code will run in your cloud every time a user requests the page. This is exactly what we're trying to avoid.

Why server actions?

The beauty of server actions is how simple it is to turn a client-side function into a server-side function. All you have to do is add a "use server" directive. A one-liner.

My idea is to move the data aggregation layer into a function. The default behavior of the application I'm trying to build is to delegate everything to the user's machine, so we run that function on the client.

If a specific page needs more performance (N+1 queries, etc.), we can simply decide to run that function on the server instead. All it takes is a one-liner.
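Here is a minimal sketch of that idea. The entity names and the join logic are my own assumptions, not a real API. The function starts life running in the browser; in a Next.js app, uncommenting the "use server" directive is the one-liner that moves it to the server, and callers don't change.

```typescript
// Hypothetical data-aggregation function. It runs on the client by
// default; the commented directive below is the one-liner that would
// turn it into a server action in a Next.js app.
export async function aggregateUserOrders(
  users: Array<{ id: string; name: string }>,
  orders: Array<{ id: string; userId: string; total: number }>,
) {
  // "use server"; // ← uncomment to run this aggregation server-side

  // Join each user with their orders: the N+1-prone work we might
  // later decide to move off the user's machine.
  return users.map((user) => ({
    ...user,
    orders: orders.filter((order) => order.userId === user.id),
  }));
}
```

Because the signature stays the same either way, the page consuming this function doesn't care where it runs.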

How does it solve scalability?

Considering that SSR is disabled, your server only needs to run server actions. But what's the difference compared to running SSR?

The idea is that you won't start with server actions. Your functions will start in the browser, making requests to the microservices directly. Initially, your server will be in charge of serving static assets, and that's all.

As needed (and only if needed), you can make a specific page use a server action instead of a client function, or even enable SSR for that page alone.

Optimizing only what is absolutely needed helps alleviate server load and saves on cloud costs.

Why not GraphQL?

GraphQL is a really good BFF strategy that brings a lot to the table. I've considered using it myself many times, but I always ended up trying some other approach because of its query language.

What puts me off is that it requires me to follow a set of rules and structure my data in a way that complies with them.

I'm not 100% free in how I consume the API, because I need to query things following another set of rules.

If you start using it early, you can let the tool dictate how your data will look. If you're trying to adopt it in a pre-existing project, it's no easy thing.

Solving the data aggregation problem

I have a situation at work where our BE uses microservices. They don't provide a BFF, so the FE needs to load entities from multiple services. You can get a glimpse of how things currently work in this other article I wrote.

The great thing about GraphQL is that if you configure things correctly, you don't need to write code to aggregate entities. Performance and scalability aside, I want to have this level of simplicity when making my requests.


I know... it's 2024 and I'm talking about JSON:API...

What I want to accomplish here is to have a standard that lets me know what relationships are available for a given entity.

Not only that, but a standard means I can expect data to be structured in a specific way. With that, I can create generic solutions that achieve the level of simplicity I want when aggregating data.
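To illustrate why a standard matters: JSON:API responses always carry the same shape (`data`, `included`, `relationships`), so a single generic helper can resolve relationships for any entity type, with no per-entity aggregation code. This is only a sketch; the types below are a simplified subset of the spec, not a real client library.

```typescript
// Simplified subset of a JSON:API resource (no links/meta, no null
// relationships) — enough to show the generic-resolution idea.
type Resource = {
  type: string;
  id: string;
  attributes: Record<string, unknown>;
  relationships?: Record<
    string,
    { data: { type: string; id: string } | Array<{ type: string; id: string }> }
  >;
};

type JsonApiDocument = { data: Resource[]; included?: Resource[] };

// Generic resolver: works for ANY entity type because the structure
// is dictated by the standard, not by the entity.
function resolveIncluded(doc: JsonApiDocument) {
  // Index the side-loaded resources by "type:id" for O(1) lookup.
  const byKey = new Map(
    (doc.included ?? []).map((r) => [`${r.type}:${r.id}`, r]),
  );
  return doc.data.map((resource) => {
    const related: Record<string, Resource[]> = {};
    for (const [name, rel] of Object.entries(resource.relationships ?? {})) {
      const refs = Array.isArray(rel.data) ? rel.data : [rel.data];
      related[name] = refs
        .map((ref) => byKey.get(`${ref.type}:${ref.id}`))
        .filter((r): r is Resource => r !== undefined);
    }
    return { ...resource, related };
  });
}
```

The same `resolveIncluded` call works whether the document holds articles with authors or orders with products, which is the level of "configure once, aggregate everywhere" simplicity GraphQL offers.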

Maybe not JSON:API, but I do need a standard.

Generic solutions and caching

I'm talking about aggregating data from multiple services. Multiple entities. But what about caching?

Say my client has an aggregated table with information from entities X and Y, and for some reason entity Y updates in the same context as that table (e.g., in a modal). Do I want to refetch all the aggregated entities by calling that server function again?

The answer is no. The solution I'm seeking has two steps: fetching and querying. With that split, when entity Y updates I don't need to refetch the whole table; I can just re-query from a local store. This can be done on the client side, which improves scalability even more.
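A minimal sketch of that two-step split, with names that are mine rather than any real library's: fetching writes entities into a normalized client-side store, and the UI only ever queries the store. When entity Y updates, you write the new version into the store and re-query locally instead of refetching the whole aggregate.

```typescript
// Any stored entity just needs an id; extra fields are entity-specific.
type Entity = { id: string; [key: string]: unknown };

// A tiny normalized client-side store: one table per entity type.
class EntityStore {
  private tables = new Map<string, Map<string, Entity>>();

  // Step 1: fetching — whatever loaded the data (client fetch or
  // server action) writes entities here. Re-putting an id updates it.
  put(type: string, entity: Entity): void {
    if (!this.tables.has(type)) this.tables.set(type, new Map());
    this.tables.get(type)!.set(entity.id, entity);
  }

  // Step 2: querying — reads (and joins) happen purely against local
  // data, so no network round-trip is needed after an update.
  query<T extends Entity>(type: string): T[] {
    return [...(this.tables.get(type)?.values() ?? [])] as T[];
  }
}
```

The aggregated table is built from `query` results, so updating Y in a modal only means one `put` followed by a local re-query.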

Reinventing the wheel?

While I'm writing this, I can't stop thinking that I'm trying to implement exactly what GraphQL is. I can even bet there is some JSON:API client out there doing exactly what I'm talking about.

But my goal is to have a solution that fits my needs at work. I have my reasons for not wanting to use GraphQL. My BE team may have their reasons for not wanting to follow JSON:API strictly. My SRE team has their reasons for not wanting to use SSR.

In the end, the solution I'm proposing here is made to solve the problems I'm facing. It fits like a glove. There is no silver bullet.

To be continued

I will be writing a separate post specifically about data management tools, probably exploring a few of the JSON:API client libraries I shared. I won't go into details here for a few reasons:

  • I'm not there yet. I'm still trying to find the best tool for my use case.
  • I want to share a solid solution/strategy, not just opinions based on a few hours of research.
  • This post is already too big.


Maybe, with all the good things RSC brings to the table, it's time to start looking back and checking how it improves pre-existing tools.

It opens a door for client-first solutions where you can gradually move things to the server as needed. With the correct set of tools, you can achieve performance similar to a BFF in critical scenarios while having the scalability and simplicity of static apps by default.

We must go for simplicity first. We must optimize only when necessary. And the optimization path should follow the first rule.

Top comments (2)

Emerson Vieira

Very good, Lucas!

Rocco Lagrotteria

Have you considered adoption of an API Gateway (KrakenD for example) to get rid of the data aggregation problem?