Paweł Kowalski for platformOS

Posted on • Originally published at documentation.platformos.com

Monitoring Website Performance Using Lighthouse-CI

We take performance seriously at platformOS. We want our services, including our documentation site, to load fast on any device, no matter how bad the connection is.
To make sure we don't miss any performance issues or regressions, we set up monitoring that runs after every production deploy and gives us a high-level overview.

Prerequisites

Because of how our infrastructure is architected, we use the following tools:

  • Jenkins CI - runs code on GitHub merge
  • Docker - handles isolation and dependencies needed for Lighthouse
  • kanti/lighthouse-ci - docker image
  • Slack - receives notifications

Note: You can use the same technique on any CI/CD system: GitHub Actions, CircleCI, TeamCity, Codeship, Travis CI - as long as it has Docker support. Without Docker you should use lighthouse-ci directly. Read their official Quick Start guide for more information on that.
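If your CI has no Docker support, a minimal sketch of running the CLI directly might look like the following (assuming the build agent has Node.js and Chrome available; the URL is an example placeholder):

```shell
# Install the lighthouse-ci CLI globally (requires Node.js and Chrome on the agent)
npm install -g lighthouse-ci

# Run an audit against your site and print the category scores
lighthouse-ci https://example.com
```

Check the lighthouse-ci Quick Start guide for the exact setup your version needs.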

Infrastructure

Our documentation site is built, tested, and deployed automatically every time code is pushed to the master branch.

  1. Code is pushed to the master branch
  2. Jenkins starts its job
  3. Assets are built
  4. Code is deployed to staging
  5. TestCafe tests are run on the staging environment
  6a. If they pass, code is deployed to production
  6b. If they fail, code is not deployed

Our new step 7, the Lighthouse stage, tests the homepage and one content page and notifies us of the results on Slack.

Code

This is the final result of our work:

stage('Lighthouse') {
  when {
    expression { return params.MP_URL.isEmpty() }
    branch 'master'
  }

  agent { docker { image 'kanti/lighthouse-ci' } }

  steps {
    sh 'curl https://documentation.platformos.com -o warmup.txt'
    sh 'curl https://documentation.platformos.com/developer-guide/glossary -o warmup2.txt'

    sh 'lighthouse-ci https://documentation.platformos.com > $HOME/tmp/lighthouse-home.txt'
    sh 'lighthouse-ci https://documentation.platformos.com/developer-guide/glossary > $HOME/tmp/lighthouse-content.txt'

    script {
      lighthouseHome = sh(returnStdout: true, script: 'grep perf $HOME/tmp/lighthouse-home.txt').trim()
      lighthouseContent = sh(returnStdout: true, script: 'grep perf $HOME/tmp/lighthouse-content.txt').trim()

      slackSend (channel: "#notifications-docs", color: '#304ffe', message: "Lighthouse Home ${lighthouseHome}")
      slackSend (channel: "#notifications-docs", color: '#304ffe', message: "Lighthouse Content ${lighthouseContent}")
    }
  }
}

Explanation

You might recognize some of the code as standard Linux commands. Let's go through it piece by piece.

when {
  expression { return params.MP_URL.isEmpty() }
  branch 'master'
}

This stage is executed only when code was pushed to the master branch and the build URL was not overridden. The check is helpful because sometimes we want to deploy the master branch to a different environment for testing purposes.

agent { docker { image 'kanti/lighthouse-ci' } }

Everything in the steps block executes within the context of the kanti/lighthouse-ci Docker container.

sh 'curl https://documentation.platformos.com -o warmup.txt'
sh 'curl https://documentation.platformos.com/developer-guide/glossary -o warmup2.txt'

Run curl against those two URLs to warm up any caches after the deploy.

sh 'lighthouse-ci https://documentation.platformos.com > $HOME/tmp/lighthouse-home.txt'
sh 'lighthouse-ci https://documentation.platformos.com/developer-guide/glossary > $HOME/tmp/lighthouse-content.txt'

Run lighthouse-ci on the given URLs and write each report to its own text file. Here we could also set a threshold below which lighthouse-ci would exit with code 1 and fail the build, but we didn't see the need for that.
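As a sketch, assuming your lighthouse-ci version supports a `--score` threshold flag (check your version's README; the threshold value here is an example):

```shell
# Fail the build (exit code 1) if the overall score drops below 90
lighthouse-ci https://documentation.platformos.com --score=90
```

With a threshold in place, a performance regression would block the deploy instead of just being reported.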

lighthouseHome = sh(returnStdout: true, script: 'grep perf $HOME/tmp/lighthouse-home.txt').trim()
lighthouseContent = sh(returnStdout: true, script: 'grep perf $HOME/tmp/lighthouse-content.txt').trim()

Take the lines containing perf and save them to variables. In practice, this means we throw away the scores from the other audits.

If we didn't do this, we would also get the accessibility, SEO, best practices, and PWA scores:

performance: 81
accessibility: 100
best-practices: 100
seo: 100
pwa: 56
All checks are passing. 🎉

We are only interested in performance, but your requirements might be different. Note that you can specify which audits run when invoking the lighthouse-ci CLI.
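The extraction step can be sketched offline. Here we simulate a report file containing the sample output above and keep only the performance line, exactly as the pipeline does:

```shell
# Simulate a lighthouse-ci report using the sample output shown above
mkdir -p /tmp/lh-demo
cat > /tmp/lh-demo/lighthouse-home.txt <<'EOF'
performance: 81
accessibility: 100
best-practices: 100
seo: 100
pwa: 56
All checks are passing.
EOF

# Same extraction as in the pipeline: keep only the line containing "perf"
score=$(grep perf /tmp/lh-demo/lighthouse-home.txt)
echo "$score"   # performance: 81
```

Because only "performance" matches "perf", the Slack message stays short and focused.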

slackSend (channel: "#notifications-docs", color: '#304ffe', message: "Lighthouse Home ${lighthouseHome}")
slackSend (channel: "#notifications-docs", color: '#304ffe', message: "Lighthouse Content ${lighthouseContent}")

Send the notification to a Slack channel.
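If your CI doesn't have the Jenkins Slack plugin, an alternative sketch is posting to a Slack incoming webhook with curl (SLACK_WEBHOOK_URL is a placeholder you would store as a CI secret; the message text is an example):

```shell
# Post a message to a Slack channel via an incoming webhook
curl -X POST -H 'Content-type: application/json' \
  --data '{"text":"Lighthouse Home performance: 81"}' \
  "$SLACK_WEBHOOK_URL"
```

The webhook itself determines which channel receives the message, so no channel parameter is needed here.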

Results

Now, every time we deploy to production (automatically), we also check that we did not introduce a speed regression.

Slack notification

Other resources

You can also see the source code of our documentation site, including the Jenkinsfile, on GitHub.
