Craig Morten

Posted on • Originally published at Medium

Automating a11y testing: Part 1 — Axe

Earlier this year I gave a talk at the John Lewis Partnership Tech Profession Conference 2023 on the topic of automating web accessibility (a11y) testing.

In this series I will cover the contents of this talk, kicking off with insights into the Axe suite of tools, followed by articles covering the wider a11y testing landscape: from visual regression testing for magnification and visual deficiencies, to emerging technology in the screen reader automation space.

I am not the user

Before going any further I’m keen to express a disclaimer for this series: "I am not the user"!

I don’t have any training beyond "on the job" learnings, I am not a user of assistive technology in my day-to-day life outside of QA, and other than a mild astigmatism I consider myself able-bodied and neurotypical. That somewhat brings forth imposter syndrome when writing on the topic of a11y testing! Nevertheless, it’s something I’m passionate about.

Throughout this series I will be mostly focusing on the automation side of a11y testing, but it is also key to remember the importance of manual validation and exploratory testing. At the end of the day it is real people that are visiting and using your site, so it is incredibly important to ensure the quality of their experience, not just to tickbox that automation has passed. In fact, current tooling can only cover around 25% of WCAG requirements (granted an old article, but still holds true!) and although later in this series we will explore some emerging technology to start pushing that coverage higher, there are still numerous aspects of the customer experience that can only be tested by a real person.

With that all covered off, let’s have a wander through the world of automated web accessibility testing!

Current state of play


It wouldn’t be an article about web accessibility testing if the Axe suite by Deque wasn’t mentioned. Put simply, this tooling has something of a monopoly at the moment on accessibility automation, and for good reason.

"The axe family of tools help you check for digital accessibility" —

Outside of very specific, dedicated tools it has the best coverage out there, along with good APIs and interfaces. Although I think the technical documentation could be better (likely personal taste — it always ends up being there, just not structured how I expect), it is far from bad, and it is made up for by a wealth of community tutorials, articles, and repo examples.

Beyond their axe-core package Deque also maintain a number of dedicated packages for the majority of popular frameworks in the web test automation space, namely:

  • @axe-core/playwright

  • @axe-core/puppeteer

  • @axe-core/webdriverjs

  • @axe-core/webdriverio

  • @axe-core/react

To give you a flavour, here is a snippet of how you might use the Playwright integration to perform static analysis on the live Waitrose Groceries site to assess the accessibility of the primary navigation menu, lovingly referred to as the "mega menu":

import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('mega menu should not have a11y violations', async ({ page }) => {
  await page.goto('');

  await page.getByRole('button', { name: 'Groceries' }).click();

  await page.locator('#megamenu').waitFor();

  const accessibilityScanResults = await new AxeBuilder({ page }).include('#megamenu').analyze();

  expect(accessibilityScanResults.violations).toEqual([]);
});


Hopefully a very standard looking test — we navigate to the desired page, interact with the Groceries dropdown button, and wait for the mega menu to be visible. The Axe part is then remarkably straightforward to get started with — we construct a new AxeBuilder, tell it to inspect the mega menu element tree, and run the analysis. Across the other framework packages the API usage is very similar.
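Axe also grades each violation by severity, which can be handy when staging adoption. As a sketch (the `onlySevere` helper below is hypothetical, not part of @axe-core/playwright), you might only fail the build on the worst findings while a backlog is worked through:

```javascript
// axe-core reports each violation with an `impact` of "minor",
// "moderate", "serious", or "critical". This hypothetical helper
// filters a scan's violations down to the most severe ones.
function onlySevere(violations) {
  return violations.filter(
    (violation) =>
      violation.impact === 'serious' || violation.impact === 'critical'
  );
}

// In the Playwright test above you might then assert:
//   expect(onlySevere(accessibilityScanResults.violations)).toEqual([]);
```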


Now the eagle-eyed of you will have noticed there was a popular framework missing from the list of Deque-owned packages for Axe — Cypress!

Don’t worry, there is a really good community package to support Cypress as well in the form of the cypress-axe package.

The interface is a little different to the other Axe packages, but personally I almost prefer this style, which works quite naturally with the Cypress API. Let’s see how we might test the accessibility of a John Lewis Search Product Listing Page (PLP):

describe('PLP Search A11y Violations', () => {
  beforeEach(() => {
    cy.visit('');
    cy.injectAxe();
  });

  it('should not have a11y violations on load', () => {
    cy.checkA11y(SELECTOR, AXE_RUN_OPTIONS);
  });
});

As is often the case with Cypress, the feedback loop from cypress-axe is also excellent, with clear logging explaining the issues when you are using the Cypress UI.

For example, in this failed test we can see the error description, some guidance, a URL for more information, and if we were to drill into the Nodes array it would give us pointers to the exact elements that are in violation of WCAG.

Cypress UI with a failed test for "Has no a11y violations after button click". The test lists the error "A11Y ERROR! Heading-order on 2 Nodes" with message "1 accessibility violation was detected: expected 1 to equal 0". This error has been pinned and in the Chrome developer tools the Console tab is open listing further details on the error, including the error id, impact, tags, description, help description and url, and the relevant DOM nodes.
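For headless runs in CI, where the Cypress UI isn’t visible, `cy.checkA11y` also accepts a violation callback. A minimal sketch (the formatter and the "log" task are assumptions of mine, not part of cypress-axe):

```javascript
// Hypothetical formatter turning axe violations into one-line summaries
// suitable for terminal output.
function formatViolations(violations) {
  return violations.map(
    ({ id, impact, description, nodes }) =>
      `${impact}: ${id} - ${description} (${nodes.length} nodes)`
  );
}

// Wired into the test above, assuming a "log" task is registered in the
// Cypress config:
//   cy.checkA11y(SELECTOR, AXE_RUN_OPTIONS, (violations) => {
//     formatViolations(violations).forEach((line) => cy.task('log', line));
//   });
```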


So what if you don’t currently use one of the aforementioned frameworks for your site?

In cases where you don’t have a setup ready to plug and play with one of the previous packages, and perhaps don’t have the knowledge or resource to spend building up a suite using one of those frameworks (though these days the developer experience for setting up is pretty good!), I would recommend taking a peek at Pa11y.

"Pa11y is your automated accessibility testing pal. It runs accessibility tests on your pages via the command line or Node.js, so you can automate your testing process." —

The nice thing about Pa11y is you can run it straight from the command line, making it natural to use for scripting or for simple smoke checks in CI. It also ships a Node package so you can write a quick test in JavaScript or TypeScript. Because it is a "wrapper" you can also benefit from its dual coverage utilising both Axe and HTML Code Sniffer (htmlcs), another static analysis tool. The documentation for Pa11y is also really good.

Let’s take a look at an example setup to test the page accessibility with the mega menu open:

import pa11y from 'pa11y';

const PA11Y_OPTIONS = {
  runners: ['axe', 'htmlcs'],
  standard: 'WCAG2AAA',
  actions: ['click element #drop-down-nav', 'wait for element #megamenu to be visible'],
};

async function runPa11yTest() {
  try {
    const results = await pa11y('', PA11Y_OPTIONS);

    console.log(results.issues);
  } catch (error) {
    console.error(error);
  }
}

runPa11yTest();

Pa11y’s actions API for performing customer interactions is a little restricted, but it does have you covered for basic behaviours like clicking, typing, and waiting for something to be visible. Because it uses English-based instructions, it has a similar feel to using BDD testing libraries (e.g. think Gherkin syntax for Cucumber), which lowers the barrier to entry for developers or even non-developers to easily tweak or extend the scope of tests — caveated of course if English is not your first language!

Unfortunately the style of this actions API does mean the tool doesn’t sit well within other frameworks — if you’re trying to use Playwright or Cypress to first act on the page for setup, it will just get ignored. Under the hood Pa11y uses Puppeteer to spin up its own Chrome tab to instrument and act out the instructions and a11y testing, so anything set up outside of Pa11y’s actions array will effectively be ignored.

But if you’re performing integration testing through the likes of Jest, Vitest, Mocha, etc. — i.e. a testing framework that doesn’t ship its own browser instrumentation — this can be a nice fit to extend a suite of tests to include a11y coverage.
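To sketch that fit: pa11y resolves with a results object whose `issues` array mixes "error", "warning", and "notice" items, so you typically want to assert on the errors alone. The `errorsOnly` helper below is hypothetical, not part of pa11y:

```javascript
// Hypothetical helper: keep only outright errors from a pa11y run,
// ignoring warnings and notices.
function errorsOnly(issues) {
  return issues.filter((issue) => issue.type === 'error');
}

// In a Jest or Vitest suite this might look like:
//   test('page has no a11y errors', async () => {
//     const results = await pa11y(url, { runners: ['axe', 'htmlcs'] });
//     expect(errorsOnly(results.issues)).toEqual([]);
//   });
```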

In summary:

  • Conveniently wraps both Axe and htmlcs

  • Really good documentation

  • Simple to use actions API, though somewhat restricted

  • Doesn’t play so well with others

Axe browser extension

Moving out of what some folks might strictly class as test automation, it’s worth discussing some of the other tooling in the Axe family.

John Lewis website open on the   coffee table product listing page in Chrome. The developer tools pane is open on the axe DevTools tab showing the overview of an Axe scan. The report lists 4 accessibility issues in a list of expandable sections, with one for "Images must have alternative text" expanded to show details of the issue. A highlight button is toggled visually highlighting the image on the page that is missing an alt tag.

To complement the Axe packages, Deque also have a really nice Axe browser extension. Some features are premium, but the "scan all" feature meets most, if not all, needs for supplementing exploratory testing with a degree of automation, taking the heavy lifting away from manually inspecting HTML, colours, etc. to try and work out if components are compliant.

Axe Linter

There is also a VSCode plugin for an Axe Linter which can be a useful tool in proactively writing accessible code right at the early development stage — eliminating the slower feedback loop of only finding violations at, say, your integration test stage in CI.

A VSCode window with code for a React based Image component. The cursor is hovered over the img element JSX which doesn't have an alt prop. The component has a red error underline and a hover tooltip which reads: "Axe Linter (image-alt): Ensures img elements have alternative text or a role of none or presentation (dequeuniversity/image-alt)".


Lastly something that hopefully might be familiar to most readers!

If you’ve ever used the Accessibility reporting feature of Google Lighthouse, it uses Axe under the hood to drive the tests and provide the nicely displayed violation information.

This is not just the case for the Chrome DevTools instance of Lighthouse — the browser extension, CLI, and Node package all make use of Axe for the accessibility score and reporting features.

Waitrose Groceries favourites page open in Chrome using iOS mobile simulation. The DevTools panel is open on the Lighthouse tab displaying an Accessibility score for the page. WCAG violations are listed with descriptions of the issue, and references to the impacted elements with screenshots highlighting the elements.

There are also some nice packages that wrap Lighthouse for easy use in test automation — for example, the cypress-audit package for Cypress is worthy of a mention for its easy-to-use cy.lighthouse() command, which can also be used for performance budget tests (it also has a cy.pa11y() command so you can kick off Pa11y tests if you prefer that interface instead).
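A minimal sketch of how such a budget might be expressed, assuming cypress-audit is installed and its lighthouse task registered in the Cypress config (the threshold values below are illustrative, not recommendations):

```javascript
// Minimum Lighthouse category scores (0-100) the page must achieve
// before the test passes.
const LIGHTHOUSE_THRESHOLDS = {
  accessibility: 100,
  performance: 50,
};

// it('meets the accessibility budget', () => {
//   cy.visit('/');
//   cy.lighthouse(LIGHTHOUSE_THRESHOLDS);
// });
```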

Axe lessons

Having reeled off and recommended a number of Axe tools, it’s worth pointing out some of the gotchas that I’ve experienced with them over the years.

React Axe issues

A big lesson for me has been around @axe-core/react. Initially I was very much in favour of the idea of having a tool that can instrument React and report issues to the console while running apps locally in development.

However... for any application of any size this package can massively tank performance, which can make local development painful and slow — especially if you are running a hot module reloading (HMR) setup where every small tweak triggers a hot reload and a potential new Axe scan of the page. If you are currently using it and ever wonder why animations are super laggy and page updates look jarring, then React Axe is a real possibility as to why. Obviously this is my experience working on large applications, mileage may vary!

My personal recommendation is to scrap React Axe and reach for the Axe browser extension instead — you don’t get "live" reports to the console, but the flow is fairly natural and is a definite improvement for developer experience. The UI and level of detail you get from the extension is also far superior to the red text you get in the console scattered amongst everything else that is getting logged.


It’s also worth calling out that Axe isn’t perfect — it can occasionally have quite annoying false positives.

The most common case I’ve found is in its reporting of colour contrast, specifically when doing anything complex with either nested elements or pseudo-elements.

For example, imagine you have a setup (admittedly contrived, but funnily enough I have experienced this in a side menu implementation!) where there is an element with a green background and then a number of nested transparent elements that cover the parent element. Above these is then, say, an absolutely positioned element, or a CSS "Z translated" element, or perhaps an element from an entirely different part of the DOM, positioned above the green element and its covering children. Say this too has some nested children — the top layer of which has some white text.

The white text sits above the green background, so this is WCAG AA compliant, but the element containing the text is sufficiently removed from the background element that Axe fails to recognise which elements (and colours) are involved in the contrast comparison and reports a violation.

The trick here is to keep it simple — if you need to ensure contrast, just apply the appropriate background (or equivalent) to the element with the text. Yes, this is somewhat fixing a non-issue due to tooling limitations, but it is slightly less fragile to changes in your page markup implementation — in future someone might refactor to reduce the complexity of this component, and having the element satisfy the necessary colour contrast requirements itself, rather than relying on some other element, will save future rework getting things compliant again in such a refactor. As the principle says, "things that move together should sit together".

There is also another lesson here — if you’re seeing false positives reported from Axe, you’re probably doing something off in your markup. There are few scenarios that warrant such a complex element layout, so Axe failing is probably a good hint that you’re in need of simplifying your HTML. Even if you are able to successfully satisfy the Axe requirement, it is very likely with such a layout that you will have potential issues with other accessibility requirements — what about users who make use of 2x zoom, or assistive technologies like magnifiers and screen readers? There is a real risk of something like the above behaving poorly for some of these tools, e.g. screen readers are known for not handling non-semantic deeply nested markup very well.

Friction in WIP projects

Unless you’re in the rare position of starting from scratch with a greenfield project, the chances are you may well be pulling something like Axe into an existing project for a site that isn’t all that accessible at the moment, as a means to start improving the situation.

The thing to be aware of here is Axe’s limitations on ignoring violations, a capability that is often needed as a practical measure when you are looking to progressively improve a site’s accessibility which has a number of existing violations.

Of course ideally we don’t want to be ignoring anything, but a nice flow when working on such a codebase is to always create a ticket, and then add ignore rules named after or commented with the ticket reference and start ticking these off.

The frustrating thing with Axe ignores is that you can’t ignore a specific rule for a specific element. Unfortunately you can either ignore all rules for an element, or you can ignore all cases of a rule. The consequence is that often you will be over-ignoring elements or rules until you have cleared the decks of most or all violations, meaning until then you’re not really getting the protection from regression that you might hope for — folks adding new code may well introduce further violations of a similar type and Axe starts ignoring these as well!

What I recommend here is to set up your tests from the start with sensible per-test configuration and avoid having a single global configuration (though for DRY, some common config might be centralised). This way you can narrow the exclusions to specific tests — say the contrast is only inaccessible when you toggle a button, you don’t want to ignore that button or the contrast rule for all other tests and scenarios in your suite!
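As a sketch of what that per-test scoping can look like with cypress-axe (the rule id 'color-contrast' is a real axe-core rule; the ticket reference and selector are illustrative):

```javascript
// Axe run options that disable a single rule, scoped to the one test
// that needs it rather than the whole suite.
const TOGGLE_PANEL_AXE_OPTIONS = {
  rules: {
    // TICKET-123: contrast fails when the panel toggle is active
    'color-contrast': { enabled: false },
  },
};

// it('has no other violations when toggled', () => {
//   // ...open the toggle, then:
//   cy.checkA11y('#toggle-panel', TOGGLE_PANEL_AXE_OPTIONS);
// });
```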

Another technique that I’ve seen employed to reasonable effect is to introduce attributes to elements that you want to ignore, for example data-axe-ignore="true", and use this as a way to target elements that you need to temporarily ignore (through a selector such as [data-axe-ignore="true"]) in order to get CI up and running with Axe tests. Once you fix the violation, just remember to remove the data attribute! This does have the downside of polluting production markup unless you have steps to purge the attributes, so there are trade-offs here.
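Axe’s context parameter supports an `exclude` list of selector arrays, so the attribute convention can be wired up once and reused. A sketch (the helper and attribute name follow the example above and are not a library API):

```javascript
// Selector matching elements flagged for temporary exclusion via the
// hypothetical data-axe-ignore convention.
const AXE_IGNORED = '[data-axe-ignore="true"]';

// Hypothetical helper building an axe context object that scans
// `selector` while skipping any flagged descendants.
function axeContext(selector) {
  return { include: [[selector]], exclude: [[AXE_IGNORED]] };
}

// With cypress-axe:
//   cy.checkA11y(axeContext('#megamenu'));
// or with Playwright:
//   new AxeBuilder({ page }).include('#megamenu').exclude(AXE_IGNORED)
```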

Closing notes

In this article I’ve covered off the majority of the Axe suite — if you want to find out more or explore some of Deque’s other offerings, check out their site.

In part 2 of this series I will cover a number of non-Axe tools and techniques for a11y test automation. See you there! 👋

Hi, my name is Craig Morten. I am a senior product engineer at the John Lewis Partnership. When I’m not hunched over my laptop I can be found drinking excessive amounts of tea or running around in circles at my local athletics track.

The John Lewis Partnership are hiring across a range of roles. If you love web development and are excited by accessibility, performance, SEO, and all the other awesome tech in web, we would love to hear from you! See our open positions here.
