Alasdair McLeay

Combining Storybook, Cypress and Jest Code Coverage

This post walks through the process of combining Storybook, Cypress and Jest code coverage, and an explanation of why you may want to do this. The full code is available on GitHub.

Code Coverage

Knowing how much and what parts of your code base are covered by some form of test can help direct future testing effort. Using tools such as Codecov can inform reviewers whether a pull request would increase or decrease overall test coverage - serving as a reminder for the reviewer to check that appropriate tests have been written when adding new features or fixing bugs.

However, you may have different types of test each running in a different system, so code coverage metrics from one type of test alone may not provide sufficient information.

Types of test

Picking the right type of test depends on the type of code you are testing:

Code that handles routing and connects to an API may be best tested with integration tests against a mock or prerecorded API, e.g. using Cypress or Selenium.

Utility code, such as string formatters, regular expressions and some React hooks or components, may be best approached with unit tests, e.g. using Jest or React Testing Library.

Dumb components and styles, such as React components that mainly exist to provide semantic markup or CSS, CSS files and CSS-in-JS, may be best covered by visual regression tests, e.g. using Storybook combined with Chromatic.


A sample application may be laid out as follows:

  • application
    • **/*.spec.js (integration tests)
    • **/styles/*.stories.js (visual regression tests)
    • **/styles/*.js (styles that aren't part of the design system)
    • **/*.test.js (unit tests)
    • **/*.js (application code)
  • utilities
    • **/*.test.js (unit tests)
    • **/*.js (utility code)
  • design-system
    • **/*.stories.js (visual regression tests)
    • **/*.js (design system code)

In such a pattern, you may want to set a few target code coverage metrics, e.g.:

  • utilities is 100% covered by unit tests
  • design-system is 100% covered by visual regression tests
  • application is split up:
    • **/styles/*.js is at least XX% covered by visual regression tests
    • all other code is at least XX% covered by unit or integration tests
  • All code is >90% covered by any type of test
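If you want a build to fail when coverage drops below a target, nyc can enforce thresholds against a report. This is a sketch rather than part of the sample project, and the numbers are illustrative:

```json
"nyc": {
  "check-coverage": true,
  "statements": 90,
  "branches": 90,
  "lines": 90
}
```

With this configuration, running nyc check-coverage exits non-zero if any metric falls below its threshold.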

But how do we get these metrics? And how do we get the overall coverage value?

An example setup

I have created a sample project that shows the following:

  • code coverage metrics from Cypress for integration tests
  • code coverage metrics from Jest for unit tests
  • code coverage metrics from Storybook for visual regression tests
  • combining the above 3 coverage reports to show overall code coverage
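The four reports land in separate directories, matching the paths configured in the package.json scripts later in this post:

```
coverage/
  integration/          <- Cypress (via nyc and @cypress/code-coverage)
  unit/                 <- Jest
  visual-regression/    <- Jest running Storyshots
  merged/               <- combined report
```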

Integration Tests

Getting code coverage for Cypress tests in a create-react-app application requires the following libraries:

  • @cypress/code-coverage
  • @cypress/instrument-cra

In order to scaffold a Cypress project with some basic config and tests, I used @bahmutov/cly, and referred to the following blog posts:

And as per the @cypress/code-coverage setup instructions, did the following:

Add to your cypress/support/index.js file
import '@cypress/code-coverage/support'

Register tasks in your cypress/plugins/index.js file
require('@cypress/code-coverage/task')(on, config)

In order to automatically start the application when the Cypress tests are run, I used start-server-and-test.

@cypress/instrument-cra doesn't collect metadata for files that aren't loaded by webpack. I work around this by running a fake test to create an initial file at .nyc_output/out.json before running the Cypress tests.

I added the fake test in the project root in a file called 'fake.test.js':

it("shall pass", () => {});

This test is used by the (slightly convoluted) "coverage:init" script below. It lets us run create-react-app's code coverage tooling with zero actual coverage, producing a JSON file that contains code coverage metadata for every file but no hits. I'll be honest, there's probably a neater way to do this.
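To illustrate what ends up in that file, here is a made-up fragment in the shape of istanbul's coverage JSON (not the real out.json; the path and statement ids are invented). Coverage data is keyed by file, with per-statement hit counts in `s`, and the fake test leaves every count at zero:

```javascript
// Made-up data in the shape of istanbul's coverage-final.json output:
// each file entry maps statement ids to hit counts in `s`.
const zeroCoverage = {
  "/src/application/App.js": {
    path: "/src/application/App.js",
    s: { "0": 0, "1": 0, "2": 0 }, // no statement was ever executed
  },
};

// The fake-test trick relies on every hit count being zero, so file
// metadata is present without contributing any actual coverage.
const allZero = Object.values(zeroCoverage).every((file) =>
  Object.values(file.s).every((hits) => hits === 0)
);
console.log(allZero); // true
```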

The following nyc settings were added to package.json:

  "nyc": {
    "report-dir": "coverage/integration",
    "reporter": ["text", "json", "lcov"],
    "all": true,
    "include": [...],
    "exclude": [...]
  }

Along with the following scripts (note the change to the default start script):

    "start": "react-scripts -r @cypress/instrument-cra start",
    "coverage:init": "react-scripts test --watchAll=false --coverage --coverageDirectory=.nyc_output --roots=\"<rootDir>\" --testMatch=\"<rootDir>/fake.test.js\" --coverageReporters=json && mv .nyc_output/coverage-final.json .nyc_output/out.json",
    "test:integration": "cypress run",
    "coverage:integration": "start-server-and-test 3000 test:integration",

Which results in the following:

Integration Test Code Coverage

I can then dig into these metrics in more detail by opening the lcov report at coverage/integration/lcov-report/index.html.

lcov report for all files

Browsing to src/application/App.js in the report I can see the uncovered branches (yellow) and lines (red):

lcov report for application/App.js
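As an aside, the percentages in these reports are just covered counts over totals. A minimal sketch of the statement metric, with illustrative data rather than anything from the real report:

```javascript
// Statement coverage: statements executed at least once / total statements.
function statementCoverage(fileEntry) {
  const counts = Object.values(fileEntry.s);
  const covered = counts.filter((hits) => hits > 0).length;
  return (100 * covered) / counts.length;
}

// Hypothetical entry: 2 of 4 statements were hit at least once.
const appJs = { s: { "0": 2, "1": 1, "2": 0, "3": 0 } };
console.log(statementCoverage(appJs)); // 50
```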

Visual Regression Tests

In order to extract code coverage from Storybook, I used @storybook/addon-storyshots to generate Jest snapshots. The snapshots are created fresh each time and not compared to existing snapshots; they're not used to track changes, only as a hook into Jest to collect coverage.

(Update November 2022: Check out @storybook/test-runner and @storybook/addon-coverage as an alternative to Storyshots)

Storyshots was set up as described in the documentation, with the addition of using 'renderOnly' so that we don't save snapshots to disk.

In ./storyshots/index.js:

import initStoryshots, {renderOnly} from '@storybook/addon-storyshots';

initStoryshots({test: renderOnly});

Then the following script was added to package.json:

    "coverage:visual-regression": "react-scripts test --watchAll=false --coverage --coverageDirectory=coverage/visual-regression --roots=\"<rootDir>\" --testMatch=\"<rootDir>/storyshots/index.js\"",

When you run this script you should see something like this:

Visual Regression Test Code Coverage

Again I can view the lcov report (coverage/visual-regression/lcov-report/index.html) for more detail:

lcov report for visual regression tests

Unit Tests

This is fairly simple, as it mainly uses what create-react-app gives you out of the box. Note that there is a bug in create-react-app@3.4.1 that prevents this from working, so it's best to stick with 3.4.0 for now.

Some minor tweaks are needed:

  1. Tell Create React App not to collect coverage from stories by adding this to package.json:
  "jest": {
    "collectCoverageFrom": [...]
  }
  2. Create a script that collects coverage from all files using react-scripts:
    "coverage:unit": "react-scripts test --watchAll=false --coverage --coverageDirectory=coverage/unit",
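The exact globs in step 1 are elided above. As an illustrative guess, such a configuration might look like this (the patterns are assumptions, not the sample project's actual values):

```json
"jest": {
  "collectCoverageFrom": [
    "src/**/*.js",
    "!src/**/*.stories.js"
  ]
}
```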

When you run the coverage:unit script you should see something like this:

Unit Test Code Coverage


Merging the Reports

I recommend using Codecov, which can merge reports for you and post metrics as a comment on pull requests. However, in this example I was looking for something I could run locally to produce a combined report.

I used istanbul-merge to produce a combined report, using the following scripts in package.json:

    "coverage": "yarn coverage:clean && yarn coverage:init && yarn coverage:integration && yarn coverage:unit && yarn coverage:visual-regression && yarn coverage:merge && yarn coverage:merge-report",
    "coverage:clean": "rm -rf .nyc_output && rm -rf coverage",
    "coverage:merge": "istanbul-merge --out coverage/merged/coverage-final.json ./coverage/unit/coverage-final.json  ./coverage/visual-regression/coverage-final.json ./coverage/integration/coverage-final.json",
    "coverage:merge-report": "nyc report --reporter=lcov --reporter=text --temp-dir=./coverage/merged --report-dir=./coverage/merged"
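Conceptually, the merge sums per-statement hit counts for each file across the three reports. This simplified sketch of what istanbul-merge does only handles the `s` map (the real tool also reconciles branch and function maps):

```javascript
// Merge two istanbul-style coverage maps by summing statement hit counts.
// Files present in only one map are carried over unchanged.
function mergeCoverage(a, b) {
  const merged = { ...a };
  for (const [file, entry] of Object.entries(b)) {
    if (!merged[file]) {
      merged[file] = entry;
      continue;
    }
    const s = { ...merged[file].s };
    for (const [id, hits] of Object.entries(entry.s)) {
      s[id] = (s[id] || 0) + hits;
    }
    merged[file] = { ...merged[file], s };
  }
  return merged;
}

const unit = { "App.js": { s: { "0": 1, "1": 0 } } };
const integration = { "App.js": { s: { "0": 0, "1": 3 } } };
console.log(JSON.stringify(mergeCoverage(unit, integration)));
// {"App.js":{"s":{"0":1,"1":3}}}
```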

On running yarn coverage I now get all of the above plus the following merged report:

Merged Code Coverage

Now that I have this report, I can look for areas of concern.

For example, it seems odd to me that even after running all unit, visual regression and integration tests, I still don't have 100% coverage on GlobalStyles.

I can dig into the lcov report to discover why:

lcov report for GlobalStyles

I have no tests for dark mode! 😢

Top comments (6)

cbishopfv

Looks like they are moving away from storyshots and to @storybook/test-runner.

Yann Braga

True! The test runner provides a more modern approach to testing stories, with much more potential, as the tests run in real browser environments, and you get cross browser support, parallel tests, etc.

Also, you can achieve code coverage with the test runner!

Mustapha Aouas

What a great idea!
I need to try doing the same with Angular now 😋

negue

If you can get this running for angular, please write a post about it too 🎉

Ioannis Papadopoulos

Great analysis on how an app is structured and the target test coverage and test type depending on the purpose of a component.

Also got a very good idea on how to combine unit/integration test coverage with visual regression tests coverage (running separately based on a static storybook build) with the initStoryshots({test: renderOnly}); trick.


Liren Tu

Hi Alasdair, nice article.

We recently developed a tool to help developers access any build artifacts (e.g. the test coverage report you wrote about) on every GitHub pull request. Our example repo for JavaScript was based on a fork of your repo; thank you very much for that.

We also wrote an article about how to preview typical JavaScript build artifacts. We'd like to know what you think about it, and whether this tool would be useful for your use case.