
Dennis Whalen for Leading EDJE

Posted on • Originally published at dennis-whalen.com

Automating Accessibility Testing With Playwright

Introduction

In my previous post, I showed you how to use the axe DevTools Chrome extension to test for accessibility issues on a webpage. Today, I'm going to show you how to do the same thing with Playwright, enabling you to automate accessibility testing in your CI/CD pipeline. Everything I'm going to demo can also be found in my sample repo.

The axe-core package

The axe-core package is a JavaScript library that can be used to run accessibility tests on a webpage. It's the same library that powers the axe DevTools extension that we looked at in my previous post, but it can be run programmatically in a variety of environments.

We're going to use it with Playwright, but axe-core packages exist for a number of automated testing frameworks, including Cypress, Selenium, WebdriverIO, and more.

Our first Playwright accessibility test

Including accessibility tests in your Playwright suite is as simple as adding a few lines of code. We've got a sample website that we're going to test, and we're going to use Playwright to navigate to the page and run an accessibility check.

Here's an example test that navigates to the webpage and runs an accessibility check:

import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test.describe('Accessibility University testing', () => {
  test('Full page scan should not find accessibility issues', async ({ page }) => {
    await page.goto('https://www.washington.edu/accesscomputing/AU/before.html');
    await page.waitForLoadState('networkidle');
    const accessibilityScanResults = await new AxeBuilder({ page }).analyze();
    expect(accessibilityScanResults.violations).toEqual([]);
  });
});

A few things to note:

  • waitForLoadState('networkidle') waits for the page to finish loading before running the accessibility check. This is important because we want to make sure the page is fully rendered before we check for accessibility issues.
  • new AxeBuilder({ page }).analyze() runs the accessibility check on the page and returns the results.
  • expect(accessibilityScanResults.violations).toEqual([]); indicates that we are expecting 0 accessibility violations. If violations are found, this test will fail.

Since our sample website is specifically designed to have lots of accessibility issues, we're expecting this test to fail, and it does!

Failed test results in CLI

Reporting

OK, so we know that our test is failing, but what accessibility issues were found? The accessibilityScanResults object contains a lot of detail about each issue, and with a few tweaks we can turn it into a readable report:

import { test, expect } from '@playwright/test';
import path from 'path';
import AxeBuilder from '@axe-core/playwright';
import { createHtmlReport } from 'axe-html-reporter';

test.describe('Accessibility University testing', () => {
  test('Full page scan of BEFORE page', async ({ page }) => {
    await page.goto('https://www.washington.edu/accesscomputing/AU/before.html');
    await page.waitForLoadState('networkidle');
    const accessibilityScanResults = await new AxeBuilder({ page }).analyze();
    createHtmlReport({
      results: accessibilityScanResults,
      options: {
        outputDir: path.join('e2e', 'test-results', 'accessibility-results'),
        reportFileName: 'my-report.html',
      },
    });
    expect(accessibilityScanResults.violations).toEqual([]);
  });
});

createHtmlReport is imported from the axe-html-reporter package, and it creates an HTML report of the accessibility issues found. The report is saved to the e2e/test-results/accessibility-results directory with the name my-report.html. Here's a snippet of what the report for our test looks like:

sample accessibility report

This is just the first page of the report, and there are plenty more details on the following pages. You'll notice that there are 50 total violations, which matches what we saw when we used the axe DevTools extension in my previous post.

The report also provides a detailed breakdown of the violations, including the rule that was violated, the impact of the violation, and a description of the issue.

Just as we did with the Chrome extension, we can use this report to identify and fix the accessibility issues on our website.
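
If you'd rather keep the results inside Playwright's own test report instead of (or in addition to) a standalone HTML file, you can attach the raw scan results to the test with Playwright's testInfo.attach. Here's a minimal sketch; the attachment name is just an arbitrary label:

import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('Full page scan with results attached to the Playwright report', async ({ page }, testInfo) => {
  await page.goto('https://www.washington.edu/accesscomputing/AU/before.html');
  const accessibilityScanResults = await new AxeBuilder({ page }).analyze();

  // Attach the raw results as JSON so they show up alongside the test in the Playwright report
  await testInfo.attach('accessibility-scan-results', {
    body: JSON.stringify(accessibilityScanResults, null, 2),
    contentType: 'application/json',
  });

  expect(accessibilityScanResults.violations).toEqual([]);
});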

Adding your tests to the CI/CD pipeline

If you have some general familiarity with Playwright, you probably already know how to run your tests in your CI/CD pipeline. If you don't, you can check out the Playwright documentation for more information. Since these accessibility tests are just Playwright tests, you can run them in the same way you run your other Playwright tests.
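
As a rough sketch, and assuming you're using a standard playwright.config.ts, a few CI-aware settings can make the suite easier to run in a pipeline (the exact values here are just examples):

import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Fail the build if a stray test.only is left in the code
  forbidOnly: !!process.env.CI,
  // Retry failing tests on CI only
  retries: process.env.CI ? 2 : 0,
  // Produce an HTML report on CI that can be published as a build artifact
  reporter: process.env.CI ? 'html' : 'list',
});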

Conclusion

In this post I showed you a basic example of how to include accessibility tests in your Playwright suite. Don't forget that axe-core is not limited to Playwright, and can be used with a variety of automated testing frameworks, such as Cypress and Selenium.

Some additional things to note:

  • Although I was just testing in Chrome, axe-core supports all major browsers. You should consider testing in multiple browsers and viewports to ensure that your website is accessible to all users on all devices.

  • The scan can be configured to include or exclude specific rules, for example the color-contrast rule, using the withRules and disableRules options.

  • You can limit the scan to a subset of the page by using the include and exclude options.

  • You can limit the scan to a subset of the WCAG guidelines by using the withTags option, as shown in the sketch after this list.
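
Here's a minimal sketch of what that configuration can look like with AxeBuilder. The chained options below are real AxeBuilder methods, but the selectors are hypothetical placeholders for elements on your own page:

const accessibilityScanResults = await new AxeBuilder({ page })
  .withTags(['wcag2a', 'wcag2aa'])   // only run rules tagged as WCAG 2.0 A or AA
  .disableRules(['color-contrast'])  // skip the color-contrast rule
  .include('#main-content')          // limit the scan to this part of the page
  .exclude('#third-party-widget')    // ignore this element (hypothetical selector)
  .analyze();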

Finally, it's probably appropriate to mention that a clean accessibility scan is a great first step in making your website accessible, but it's not a silver bullet. For example, take the check that verifies images have alt text. The tool can tell you if an image is missing alt text, but it can't tell you if the alt text is meaningful.

Manual validation via a screen reader is an additional step you can do to ensure that your website is accessible. Automation tooling exists that can allow you to automate the manual screen reader testing process, but that's a topic for another post. Stay tuned!



Top comments (2)

Chris

I'm a massive fan of using whatever you can to catch accessibility issues, so this is great.

However, automation tends to only spot about 30%-50% of accessibility issues, based on estimates from others in the industry. Sometimes the worry is that manual testing won't be done.

Dennis Whalen

Agreed, Chris. I've seen those numbers too, and that's what I was referring to in the last couple of paragraphs. Thanks for reading!