
Max Prilutskiy


How We Reduced JS Deployment Time by 25x

Imagine slashing your CI/CD pipeline time from 26 minutes down to a mere 1 minute.

Sounds like a dream, right?

Well, I've been there and done that at Notionlytics. All it took was a series of performance optimizations and some creative thinking.

So, let's dive in and explore the techniques I used to achieve this remarkable improvement.


Switching Gears

Pnpm

First things first, I made two crucial infrastructure changes. I bid farewell to Yarn and embraced pnpm, the package manager known for its speed and efficiency. Its three-stage installation process (dependency resolution, directory structure calculation, and dependency linking) handled our installs swiftly.

The outcome?

Significant improvements in installation times.
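If you're curious what the switch looks like in practice: pnpm can convert an existing yarn.lock with `pnpm import`, and in a monorepo the workspace packages are declared in a `pnpm-workspace.yaml` at the repo root. A minimal sketch (the folder globs are placeholders, adjust them to your layout):

    # pnpm-workspace.yaml: tells pnpm which folders are workspace packages
    # The globs below are placeholders; match them to your repo structure
    packages:
      - "apps/*"
      - "packages/*"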

Turborepo

Moving on, I swapped NX for Turborepo to manage the monorepo. Turborepo's task caching proved invaluable for storing the results and logs of scripts like build, test, and lint, and its parallel task execution significantly reduced build times.

NX is undoubtedly impressive, and truth be told, I had become quite accustomed to it. Nevertheless, it was time to move on: Turborepo takes a simpler approach and feels much faster.
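For reference, a minimal `turbo.json` along these lines is enough to get task caching and parallel execution going. This is only a sketch using Turborepo v1's `pipeline` syntax, and the output globs are assumptions, so point them at whatever your packages actually emit:

    {
      "$schema": "https://turbo.build/schema.json",
      "pipeline": {
        "build": {
          // Build each package's dependencies before the package itself
          "dependsOn": ["^build"],
          // Folders to cache and restore on a cache hit
          "outputs": [".next/**", "build/**", "dist/**"]
        },
        "test": { "dependsOn": ["build"] },
        "lint": {}
      }
    }

A single `turbo run build test lint` then fans the work out across packages in parallel and replays cached results, logs included, for anything that hasn't changed.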

Cache Me If You Can

Pnpm

To maximize performance gains, I incorporated caching into our CI workflow.

Using the actions/cache action, we implemented a strategy to store and retrieve dependencies between runs, preventing redundant installations.

Here's how it was done:

      # Ask pnpm where its content-addressable store lives on this runner
      - name: Configure pnpm cache
        id: pnpm-cache
        run: echo "STORE_PATH=$(pnpm store path)" >> $GITHUB_OUTPUT
      # Cache that store, keyed on the lockfile so the key changes whenever dependencies change
      - uses: actions/cache@v3
        with:
          path: ${{ steps.pnpm-cache.outputs.STORE_PATH }}
          key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
          restore-keys: |
            ${{ runner.os }}-pnpm-store-
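For completeness: the cache steps above assume pnpm itself is already installed in the job (otherwise `pnpm store path` fails), and the actual install runs after the cache has been restored. A rough sketch of those surrounding steps, using pnpm/action-setup:

      # Runs before the cache steps above, so that `pnpm store path` is available
      - uses: pnpm/action-setup@v2
        with:
          version: 8 # adjust to the pnpm version your repo pins
      # ...the cache steps shown above go here...
      # Runs after the cache is restored; installs strictly from the lockfile
      - name: Install dependencies
        run: pnpm install --frozen-lockfile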

Turborepo

Now for the exciting part: Turborepo caching. The key advantage is that Turborepo remembers the outputs of an initial build, so subsequent builds only rebuild what has changed.

But wait, there's more!

Turborepo also offers remote caching, which lets us preserve build outputs between CI runs.

While Vercel serves as Turbo's default remote caching destination, I stumbled upon a remarkable find: setup-github-actions-caching-for-turbo - a GitHub Action that enables caching of build artifacts within GitHub Actions itself, without any additional cost!

Here's how to set it up:

      - name: Configure Turbo cache
        uses: dtinth/setup-github-actions-caching-for-turbo@v1

It's as simple as that. The setup is hassle-free, and it works seamlessly.
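Once that step has run, any turbo invocation later in the same job picks up the cache automatically. For example, assuming turbo is installed as a dev dependency:

      # Reads from and writes to the GitHub Actions cache via the step above
      - name: Build
        run: pnpm turbo run build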


Deployments

Cloudflare Pages

Deployments are critical moments in the CI/CD pipeline, so I optimized this stage by transitioning from Vercel hosting to Cloudflare Pages.

Cloudflare Pages emerged as the clear winner thanks to its parallel deployments via the wrangler CLI, forever-free static asset hosting, and lightning-fast file hashing.

One notable advantage is that the CLI used for deploying to Cloudflare automatically performs hash-checks on assets. This means that if you utilize code-splitting, only the changed chunks will be uploaded, resulting in faster deployments.
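The deploy step itself boils down to a single wrangler command. Here's a sketch; the project name and output directory are placeholders, and on older wrangler versions the subcommand is `pages publish` instead:

      # Project name and output directory below are placeholders
      - name: Deploy to Cloudflare Pages
        run: npx wrangler pages deploy ./out --project-name=my-site
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}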

Semantic Release

To further streamline deployments, I introduced semantic-release. It analyzes the commits made since the previous version and tags a new release only when the changes call for one. As a result, deployments now run only when a new tag is present, saving us valuable minutes.
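In the workflow, that amounts to a release step along these lines (a sketch; semantic-release inspects the commit history and only creates a new tag and release when the changes warrant one):

      - name: Release
        run: npx semantic-release
        env:
          # Needed so semantic-release can create tags and GitHub releases
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}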


The Benchmark

So, let's talk numbers.

Initially, the monorepo comprising static Next.js sites, a React CRA app, and a Node.js API had a staggering CI/CD pipeline duration of 26 minutes and 49 seconds, regardless of the code changes. As the project grew, this time only increased.

However, after implementing all the performance optimizations, here are the results we achieved:

  • Without cache: 12 minutes and 6 seconds (a remarkable 50% reduction!)
  • With cache:
    • Node.js: 7 minutes and 1 second (still relatively slow due to building and pushing a Docker image)
    • React CRA app: 3 minutes and 22 seconds
    • Next.js app: 1 minute and 19 seconds
    • Infrastructure/docs/chore changes: just 1 minute and 2 seconds

I'd say those are some pretty good improvements: not only did we slash our CI/CD pipeline duration by more than half, but we now have a much faster response time for changes in any one part of the monorepo. We can now deliver features and bug fixes to production much faster, with much less stress.

Thanks for reading!

What are your thoughts on these outcomes? Any tips you want to share? Let's continue the conversation in the comments below.


P.S.: If you liked this, you might like some other things I've done recently:

AI-powered changelog updates on Slack, every Monday, with GitHub Actions

You'll never have to deal with outdated TODO comments again

Follow me on Twitter: @MaxPrilutskiy
