
Split Blog for Split Software


Instant Feature Flags With Next.js

The Problem With Feature Flags When Using Next.js

Do you use feature flags to get the benefits of modern, high-velocity software deployment? You do, right? Then you have probably noticed a problem when using Next.js.

Next.js provides awesome “instant” page startup using techniques like server-side rendering. But as part of that startup, the browser must retrieve the current settings of your flags, so that it can enable or disable features appropriately for that user.

This can leave your app in a challenging “hurry up and wait” situation: you deliver your app’s page instantly, but then must wait for some kind of network call to determine the values of your flags. A CDN can make this experience significantly faster, but if you are looking for the absolute fastest startup experience, even a hundred-millisecond round trip to the CDN may seem undesirable. You’ve invested in technology to deliver great page-load experiences, and then slowed it down with your necessary infrastructure.

Here, I’ll talk about a technique you can use to “have your cake and eat it too”. By using Split’s feature flagging SDKs, you can get both instant page loading and instant (within a couple of milliseconds) feature flag settings, all without additional network calls.

You can also use feature flags in the back-end parts of your application that manage data persistence, in the mid-tier where server-side rendering (or even static site generation) happens, as well as in the browser.

A Refresher on Feature Flags

A feature flag is, usually, a conditional statement in your code that “decides” whether to show a new or an old version of some feature. This allows you to deliver multiple versions of your application simultaneously.
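As a sketch, such a conditional might look like this. The function name and feature are illustrative; Split's SDK does return treatments as strings like 'on' and 'off':

```javascript
// Minimal sketch of a feature flag guarding two code paths.
// `treatment` is the string your flagging system returns for this
// user ('on', 'off', or a fallback like 'control').
function renderCheckout(treatment) {
  if (treatment === 'on') {
    // New version: may still be incomplete, and only runs when the
    // flag is turned on for this user.
    return 'new one-click checkout';
  }
  // Old, battle-tested version; also the safe fallback for any
  // unexpected treatment value.
  return 'classic checkout';
}
```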

This is valuable because it increases your development velocity: every change (e.g. every PR) can be deployed into production as soon as it is approved. That is, you can deploy incomplete features into a production environment without fear, confident that they will never be executed because they are on the “wrong” side of that conditional statement. This removes the need for long-lived feature branches and other practices that slow down deployment velocity.

Whether that condition evaluates to “true” or “false” depends on some kind of configuration. With a SaaS-based flagging system, that configuration can often express elaborate rules for when a feature should be shown. For example, show the “true” version of the feature only to ten percent of the users in a particular geography who are also part of a beta program.

This, in turn, reveals a second great benefit of feature flags: you can separate deployment from release. You control the release via remote configuration, with precise control over the process. Rather than turning a feature on for all users and hoping all goes well, you can turn it on gradually: make the feature available to 1% of users, then 5%, then 10%, and so on. If something goes wrong with the new feature, you can turn it back off immediately. With these tools, releasing can be a no-stress process.

The capstone of feature flags, however, is the ability to measure changes in your application and correlate those back to individual features. Did the dollar value of your customer purchases go down? Was there a particular feature that caused this? Without feature flags, how would you know whether an individual feature had a good or bad impact? Without that information, how can you make good business decisions? When coupled with a feature-flag-aware A/B testing system, you can run experiments to discover all of the above while you release new features.

Given all these benefits, why would you not be using feature flags?

Instant Flags

Ordinary Behavior of the Split SDK

One of the great benefits the Split feature flag SDK offers over some other approaches is that whether a flag is “on” or “off” is determined entirely by local calculations within the SDK.

With local calculation, your code can decide whether to show a feature in the space of a couple of milliseconds, since it does not have to communicate with a SaaS server to make these decisions. This has significant privacy benefits, since the information used to inform the flag decision never leaves the client device. But for our purposes, it also means that once your app is up and running, it always executes at the fastest rate possible, delivering a smooth experience for your users.

For this to work, the SDK needs to get a copy of the feature flag definitions at application startup.

When the Split SDK first downloads the feature flag definitions, it stores a copy of those definitions in local storage. This allows it to start up faster the next time the app is run in the same browser: since an initial set of flag definitions is already available, the application can start without waiting for the definitions to be downloaded.
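The warm-start behavior can be sketched as follows; the key name and storage shape here are made up for illustration and are not the Split SDK's actual cache format:

```javascript
// Sketch of the warm-start pattern (illustrative key name, not
// Split's real LocalStorage layout). The storage parameter defaults
// to the browser's localStorage, but any object with getItem/setItem
// will do.
function loadDefinitions(fetchRemote, storage = globalThis.localStorage) {
  const cached = storage.getItem('flag.definitions');
  if (cached) {
    // Warm start: use the cached copy immediately, and refresh it in
    // the background for next time.
    fetchRemote().then(defs =>
      storage.setItem('flag.definitions', JSON.stringify(defs)));
    return Promise.resolve(JSON.parse(cached));
  }
  // Cold start (first visit): wait for the download once, then cache it.
  return fetchRemote().then(defs => {
    storage.setItem('flag.definitions', JSON.stringify(defs));
    return defs;
  });
}
```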

Faster Than Really Fast

The picture summarized so far is already a really fast feature flag system. On all but the first run, your application already has a set of flag definitions available, so no network call is needed to get started, and no network calls are needed for feature flag evaluation. This is already fast, with less than a second of overhead in the worst case (first-time startup).

But for applications seeking the absolute fastest performance, can we eliminate the sub-second overhead? After all, nothing is better than better, right?

Suppose we were to deliver a copy of the feature flag definitions during the initial server-side rendering of a page, then install them in LocalStorage before starting up the Split SDK. This seems like it would give us a complete feature flag solution with absolutely no overhead. Is it possible?

It is possible! Let’s take a look at one solution.

Example Solution

We can take advantage of all this, plus the server-side rendering support in Next.js:

  • Keep a cached and up to date copy of all the feature flag definitions on the back end.
  • When responding to a page request, include a copy of the definitions in the server-side rendered page.
  • In the browser, store these definitions in LocalStorage, then start up the Split SDK, so it uses the definitions previously stored in LocalStorage instead of performing a network call.

Back End

On your server side, you need to keep an up-to-date copy of all the feature flag definitions. You might store this information in Redis, or some other store accessible from your Next.js instances.

For example, the following code fragment retrieves the flag definitions every minute and stores the data in some durable storage. The point is to periodically retrieve any changes to the flag definitions and integrate those into the cache.

import axios from 'axios';
import cron from 'node-cron';

import { serverSdkApiKey } from './constants';

let cacheOfCache = { splits: [], since: -1, till: -1 };
let fromEpoch = -1;

function writeCacheInfo(requestResponse) {
    for (const newFlag of requestResponse.splits) {
        let indexOf = -1;

        cacheOfCache.splits.find((cachedFlag, index) => {
            const found = cachedFlag.name === newFlag.name;

            if (found) {
                indexOf = index;
            }
            return found;
        });

        if (indexOf === -1) {
            // New flag: add it to the cache
            cacheOfCache.splits.push(newFlag);
        } else {
            // Known flag: replace it with the updated definition
            cacheOfCache.splits[indexOf] = newFlag;
        }
    }
    cacheOfCache.till = fromEpoch;
}

async function makeRequest() {
    const config = {
        method: 'get',
        // Split's splitChanges endpoint (the URL was lost in the original;
        // this is the public SDK endpoint). `since` asks only for changes
        // newer than what we already have; -1 means "everything".
        url: 'https://sdk.split.io/api/splitChanges?since=' + fromEpoch.toString(),
        headers: {
          'Authorization': `Bearer ${serverSdkApiKey}`,
          'Accept-Encoding': 'gzip, deflate, br',
        },
    };

    const res = await axios(config);

    if (res.status !== 200) {
        throw new Error('Error when fetching feature flag rules');
    }

    fromEpoch = res.data.till;
    return res.data;
}

makeRequest().then(writeCacheInfo).catch(e => console.log(e.message));

cron.schedule('*/1 * * * *', () => {
    makeRequest().then(writeCacheInfo).catch(e => console.log(e.message));
});
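The getFlagRules() helper imported during server-side rendering below is not shown in the original. A minimal sketch, assuming the in-memory cacheOfCache above serves as the store; with Redis or another store shared across Next.js instances, this would be an async read instead:

```javascript
// Hypothetical sketch of getFlagRules(), which getInitialProps calls
// to embed the flag definitions into the server-rendered page. Here
// it reads the same in-memory cache the cron job maintains.
const cacheOfCache = { splits: [], since: -1, till: -1 };

function getFlagRules() {
  // Return a deep copy so that serializing the page props can never
  // mutate the live cache.
  return JSON.parse(JSON.stringify(cacheOfCache));
}
```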


To make this work, we create a custom App component that delivers the feature flag definitions via server-side rendering, installs them into local storage, and then starts up the Split SDK:

import React from 'react'
import App from 'next/app'
import { SplitFactory } from '@splitsoftware/splitio'
import { clientFEATURE, clientSdkApiKey } from '../constants'
import { getFlagRules } from '../persistence/getFlagRules';
import { populateLocalStorage } from '../populateLocalStorage';

class MyApp extends App {

  state = {};

  async componentDidMount() {
    const { pageProps, serverSideFlagCache } = this.props;
    const { userId } = pageProps;
    const startTime = Date.now();

    // Store the server-side rendered definitions in local storage
    populateLocalStorage(serverSideFlagCache);

    // Start the SDK, telling it to read from LocalStorage
    console.log("Creating SDK factory")
    window.split = window.split || SplitFactory({
        core: {
          authorizationKey: clientSdkApiKey,
          key: userId,
        },
        storage: {
          type: 'LOCALSTORAGE'
        }
    });

    const splitClient = window.split.client()

    splitClient.on(splitClient.Event.SDK_READY_FROM_CACHE, function () {
        const endTime = Date.now();
        console.log(`SDK Ready from cache. ${endTime - startTime}ms to get ready.`);
    });

    await splitClient.ready()
    const treatment = splitClient.getTreatment(clientFEATURE);

    this.setState(() => ({
      sdkReady: true,
      feature: treatment
    }));
  }

  render() {
    const { Component, pageProps } = this.props;
    const { feature, sdkReady } = this.state;

    return (<Component {...pageProps} clientTreatment={feature} isReady={sdkReady} />);
  }
}

// Server-side rendering: fetch the flag definitions and deliver them
// alongside the page props
MyApp.getInitialProps = async function () {
  const userId = getUserId();   // however your app identifies the user
  const serverSideFlagCache = getFlagRules();
  return { pageProps: { splitName: clientFEATURE, userId }, serverSideFlagCache }
}

export default MyApp;

Finally, the populateLocalStorage() function takes the server-side rendered flag definitions and reconstructs the cache in LocalStorage.

const populateLocalStorage = (cache) => {
    const trafficTypeCounts = {};
    let splitsUsingSegments = 0;

    for (const split of cache.splits) {
        localStorage.setItem(`SPLITIO.split.${split.name}`, JSON.stringify(split))
        trafficTypeCounts[split.trafficTypeName] =
          (trafficTypeCounts[split.trafficTypeName] || 0) + 1;

        // Count the splits whose targeting rules reference segments
        for (const condition of split.conditions) {
            if (condition.label.includes('segment')) {
                splitsUsingSegments += 1;
                break;
            }
        }
    }

    localStorage.setItem("SPLITIO.splits.till", cache.till)
    localStorage.setItem("SPLITIO.splits.lastUpdated", cache.till)
    localStorage.setItem("SPLITIO.splits.usingSegments", splitsUsingSegments)

    for (let ttInfo of Object.entries(trafficTypeCounts)) {
        localStorage.setItem(`SPLITIO.trafficType.${ttInfo[0]}`, ttInfo[1]);
    }

    console.log("Local storage is updated")
}

export { populateLocalStorage };


How fast does this make the application startup?

This is the console output from several runs I just made:

  • SDK Ready from cache. 3ms to get ready.
  • SDK Ready from cache. 4ms to get ready.
  • SDK Ready from cache. 4ms to get ready.

So, we can see it takes perhaps as many as four milliseconds to store the definitions, start up the SDK, and be ready to go. That is, the “cost” of running a cutting-edge feature flagging system here is less than one re-rendering of a simple app’s page. Something even your most detail-oriented users will not notice.


For applications seeking the absolute highest performance, it is possible to use feature flags while paying only a few milliseconds of “overhead”.

You do not have to make a tradeoff between the highest performance apps and taking advantage of the best of modern developer-velocity practices.
