Performance optimization is an area of software development that you will most probably encounter at some point in your professional career, as it is not strictly tied to any particular frontend or backend framework. In this tutorial I won't be talking about how you can improve the performance of your Vue.js or Nuxt.js applications (this is explained very well in the tutorial series by our man Filip Rakowski, CTO of Vue Storefront, in which he provides many tips and tricks on how you can improve the performance of your website:
https://vueschool.io/articles/series/vue-js-performance/).
You can also check out the article I recently wrote about improving the performance of Vue.js and Nuxt.js applications, in the form of a useful checklist: https://dev.to/baroshem/performance-checklist-for-vue-and-nuxt-cog
These recommendations will increase the performance of your application, but to make sure that they (and any other future site features) actually improve performance, you have to measure it constantly.
You can do it the old-fashioned way by running Lighthouse in your browser each time you want to make a change to your website, but this takes time and the results of such tests may differ drastically from one another (as developers may be running the audit on different devices). The better approach is to implement some kind of Continuous Integration script that conducts these Lighthouse audits for us. Thankfully, there are already tools available that you can use to achieve that.
For this tutorial I will be using Nuxt.js, Lighthouse CI, and Github Actions, but you can adjust it to your own code repository workflows. In this tutorial I won't be covering the topic of assertions and budgets, but I will create separate articles about them soon (oops, spoilers). If you are interested in Lighthouse configuration you can visit this page: https://github.com/GoogleChrome/lighthouse/blob/master/docs/configuration.md
Nuxt.js
We will create a simple Nuxt.js project with the command below:
yarn create nuxt-app <project-name>
For the sake of this tutorial we will generate a basic Nuxt.js project with just a homepage.
Navigate to your project, run it, and check if everything is working as expected:
cd <project-name>
yarn dev
Lighthouse
You will need to install and authorise the Lighthouse CI application for Github. Make sure to copy the generated LHCI_GITHUB_APP_TOKEN as we will need it later.
https://github.com/apps/lighthouse-ci
After authorisation you should see a page like this:
Install @lhci/cli package
yarn add -D @lhci/cli
Add Lighthouse CI commands to your package.json (for CI and local testing)
// package.json
...
"scripts": {
  "dev": "nuxt",
  "build": "nuxt build",
  "start": "nuxt start",
  "generate": "nuxt generate",
  "lhci:mobile": "lhci autorun",
  "lhci:desktop": "lhci autorun --collect.settings.preset=desktop"
},
Create lighthouserc.json with the configuration for Lighthouse CI
// lighthouserc.json
{
  "ci": {
    "collect": {
      "startServerCommand": "yarn build && yarn start",
      "url": ["http://localhost:3000/"],
      "numberOfRuns": 3
    },
    "upload": {
      "target": "temporary-public-storage"
    }
  }
}
Let's stop for a second here to explain how we are configuring Lighthouse CI to conduct audits.
collect:
- startServerCommand - the command that Lighthouse should use to start the server before testing. In our case we build our Nuxt project for production and start it.
- url - the URL that we want Lighthouse to conduct audits on. For the sake of this tutorial we will be using http://localhost:3000/ to test just the homepage, but you can also set up other routes here such as http://localhost:3000/categories (see the example after this list).
- numberOfRuns - a number that defines how many times Lighthouse should test the selected URL and create a median out of these results.
upload:
- target - where we want to upload the result of our Lighthouse audit report. By default it is set to temporary-public-storage.
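For example, a configuration that tests several routes with more runs per URL could look roughly like this (a minimal sketch; the /categories route and the number of runs are just placeholders for whatever pages matter in your app):
// lighthouserc.json
{
  "ci": {
    "collect": {
      "startServerCommand": "yarn build && yarn start",
      "url": [
        "http://localhost:3000/",
        "http://localhost:3000/categories"
      ],
      "numberOfRuns": 5
    },
    "upload": {
      "target": "temporary-public-storage"
    }
  }
}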
Test if the Lighthouse audit is working as expected
yarn lhci:desktop
The command should log the following result:
And when we visit the link that was created by Lighthouse in the terminal we should see something like this:
Well done! You have now successfully conducted a Lighthouse audit locally. As our final step, we will create a Github workflow (for example .github/workflows/ci.yml) to run Lighthouse CI on every pull request to the main branch.
name: CI
on:
  pull_request:
    branches:
      - main
jobs:
  lighthouse:
    name: Lighthouse CI
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      - name: Install dependencies
        run: yarn
      - name: lighthouse mobile audit
        run: yarn lhci:mobile
        env:
          LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}
      - name: lighthouse desktop audit
        run: yarn lhci:desktop
        env:
          LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}
Now, whenever we create a pull request to the main branch with this workflow implemented, we will automatically trigger a Github Action that conducts the Lighthouse audits.
Remember the step about authorising the Lighthouse application for Github? If you do not have the secret in your repository, you will still be able to trigger the Github Action, but you will not get a nice-looking status check from Lighthouse with all the metrics. No worries, you will still be able to see the report, but you will have to open the details of the action and follow the link from there.
When we add the token in the repository settings it should be visible like this:
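If you prefer the command line, you can also add the secret with the GitHub CLI (a sketch assuming you have gh installed and are authenticated against the repository):
# adds the token as a repository secret; you will be prompted to paste the value
gh secret set LHCI_GITHUB_APP_TOKEN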
To confirm that we did all the steps correctly, we should see a status report from Lighthouse directly in the Github Actions checks of the pull request.
** Keep in mind that the Lighthouse application for Github will provide only one status report, even though we have run two tests (for desktop and mobile devices), so you will have to check the second report manually. If you have found a way to display multiple status reports please let me know in the comments and I will update the article accordingly :)
Summary
You have successfully implemented Lighthouse CI auditing that can be triggered both locally and as a Github Action.
This approach will suit most cases, however to achieve more accurate performance audits you should conduct Lighthouse tests on a dedicated server so that the results are not affected by the capabilities of the machine running them. In other words, if you are running Lighthouse audits on a shared runner where several pull requests/workflows/pushes are going on at once, the results of the audit may not be accurate, and this is what we want to avoid. For that you would need a separate machine with Lighthouse CI Server installed on it. On a pull request you would then trigger this machine to conduct the performance audit and return the result to your repository.
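If you go that route, the upload section of lighthouserc.json points to your own server instead of the temporary public storage. A minimal sketch, assuming your Lighthouse CI Server is reachable at https://your-lhci-server.example.com and you have generated a build token for the project on it (both values are placeholders):
// lighthouserc.json
{
  "ci": {
    "upload": {
      "target": "lhci",
      "serverBaseUrl": "https://your-lhci-server.example.com",
      "token": "your-project-build-token"
    }
  }
}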
In the upcoming articles I will be covering the topic of setting a performance budget and performance assertions.
Below you can see a demo repository with the code from this article:
https://github.com/Baroshem/nuxt-lighthouse-ci
Bonus: Using a Github Action instead of the npm package
Instead of using the npm package, you could use a Github Action that essentially does the same thing. The downside to that approach is that you won't be able to test your project locally with Lighthouse (unless you are using a local Github Actions runner like https://github.com/nektos/act).
name: CI
on:
  pull_request:
    branches:
      - main
jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      - name: Install dependencies
        run: yarn
      - name: Build the project
        run: yarn build
      - name: Start the project
        run: yarn start & # run the server in the background so the job can continue
      - name: Audit URLs using Lighthouse
        uses: treosh/lighthouse-ci-action@v7
        with:
          urls: http://localhost:3000
          uploadArtifacts: true # save results as an action artifacts
          temporaryPublicStorage: true # upload lighthouse audits to google temporary storage