Earlier this year I gave a talk on the topic of automating web accessibility (a11y) testing at the John Lewis Partnership Tech Profession Conference 2023 which I’m delighted to be sharing here in an article series format.
In Part 1 of this series I covered a number of tools from the Axe suite for static analysis of sites to find a11y violations, from framework-specific packages to VSCode integrations, and shared a few learnings, caveats, and gotchas from using Axe tools.
However, this is just a small subset of available test automation tools. In this article we will start to explore some of the non-Axe tooling (though Axe will certainly pop up again!) in the wider a11y test automation space. Let’s get going!
Linting for a11y
Although some aspects of web accessibility require manual validation as they are subjective regarding the user experience, many requirements of the Web Content Accessibility Guidelines (WCAG) are for ensuring you use the correct semantic markup and associated attributes to correctly present your content to users. Such requirements are well suited to static analysis checks as they are deterministic and rule based — indeed there is such a set of official rules set out in the Accessibility Conformance Tests (ACT) specification which are used by static analysis tools such as Axe that we covered in Part 1.
"Throughout this series I will be mostly focusing on the automation side of a11y testing, but it is also key to remember the importance of manual validation and exploratory testing" — Key considerations from Part 1 still apply!
Although we can run static analysis checks in CI, as a form of shift left and early feedback, a really natural and effective place to run these checks is in code linting. What is linting if not static analysis of code / markup! We’ve already seen the VSCode Axe Linter in the previous article, so let’s look at some others.
ESLint
If you are already plugged into the ESLint ecosystem, there are a number of plugins that you can reach for depending on your framework:
- React and other JSX based frameworks — eslint-plugin-jsx-a11y
- React Native — eslint-plugin-react-native-a11y
- Vue — eslint-plugin-vue-a11y and / or eslint-plugin-vuejs-accessibility
- Angular — @angular-eslint/eslint-plugin
- Lit Web Components — eslint-plugin-lit-a11y
Mileage may vary with coverage from plugin to plugin, but each comes with the same great developer experience of real-time feedback as you write your code, as well as all the other awesome stuff you get with ESLint such as IDE integrations with intellisense, auto-fix, etc.
A warning from eslint-plugin-jsx-a11y in VSCode
As with your other linting rules, you can also lean into the power of running the linting both locally (whether manually or through git hooks) and in CI, to ensure you catch issues before you even consider raising that pull request.
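For example, wiring eslint-plugin-jsx-a11y into a React project takes only a small config change. The following is a minimal sketch, assuming ESLint and the plugin are already installed as dev dependencies; the individual rule override is purely for illustration:

```javascript
// .eslintrc.js — minimal sketch for a React project using
// eslint-plugin-jsx-a11y (npm install --save-dev eslint-plugin-jsx-a11y)
module.exports = {
  plugins: ["jsx-a11y"],
  extends: [
    // "recommended" enables a sensible default ruleset;
    // "plugin:jsx-a11y/strict" is a stricter alternative
    "plugin:jsx-a11y/recommended",
  ],
  rules: {
    // Individual rules can still be tuned, e.g. promoting one to an error:
    "jsx-a11y/alt-text": "error",
  },
};
```

With this in place, violations surface in your editor as you type and fail the lint step in CI, just like any other ESLint rule.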
Svelte
Those reading closely may have noticed that the ESLint plugin list above (which was by no means exhaustive) was missing the popular framework Svelte.
"SvelteKit strives to provide an accessible platform for your app by default. Svelte’s compile-time accessibility checks will also apply to any SvelteKit application you build." — https://kit.svelte.dev/docs/accessibility
Svelte (and SvelteKit) has a number of accessibility warnings built into the compiler itself which cover a subset of a11y rules. It’s super cool to have these built into the framework, but it is worth flagging that they aren’t a catch-all. Even within the limited scope of static analysis checkers, as flagged by Svelte maintainer Geoff Rich, there are a number of gotchas around these compiler checks.
For example, the Svelte compiler is only able to check hard-coded values in markup. If you provide a dynamic value via a variable, the Svelte compiler isn’t able to determine the possible values for that attribute and thus doesn’t yield the desired warning.
<script>
let href = '#';
</script>
<a href={href}>More Information</a>
In this example, hard-coding href="#" directly in the markup would normally trigger an a11y warning from the compiler, but passing it via the variable results in the warning being suppressed.
Other
There are a limited number of other linting options out there for a11y, some of which are paid options:
- ember-template-lint — A linter for Ember projects that includes a11y rules.
- AccessLint — A paid GitHub App integration (with free trial): "AccessLint brings automated web accessibility testing into your development workflow."
- Axe DevTools Linter — In addition to the VSCode plugin covered in Part 1, Deque also offer paid (with free trial) integrations with GitHub Actions, SonarQube, a REST API, and a CLI for their a11y linter.
Visual testing
Let’s pivot over to some ideas around a11y visual testing!
Emulated vision deficiencies
Back in Chrome 83 the Chromium team released a DevTools change that allowed you to emulate different vision deficiencies in the browser. This is accessible from the Chrome DevTools Command Palette (open DevTools and key CMD+SHIFT+P on Mac or CTRL+SHIFT+P for Windows) by searching for "Rendering", where you can then pick from six different emulated vision deficiencies.
Something that I’ve learned recently is that Chrome also boasts the Chrome DevTools Protocol (CDP), which allows you to control almost every feature you get in DevTools via an API. For example, for those using Playwright, this example demonstrates how we can set up a CDP session and request emulated blurred vision:
type EmulatedVisionDeficiency =
| "none"
| "blurredVision"
| "reducedContrast"
| "achromatopsia"
| "deuteranopia"
| "protanopia"
| "tritanopia";
test("visually acceptable when have blurred vision", async ({ page }) => {
const client = await page.context().newCDPSession(page);
await client.send("Emulation.setEmulatedVisionDeficiency", {
type: "blurredVision",
});
// ... visual test with Playwright / other third party lib
});
Provided your test framework supports a Chromium-based browser then CDP should be available and you can make use of this capability in your browser-based visual tests, e.g. using Playwright screenshots or Cypress image snapshots.
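Rather than hand-rolling a separate CDP call per deficiency, the list can be captured once and the command built by a small helper. The following is a sketch: the helper and constant names here are my own invention, not part of Playwright or any CDP client library.

```typescript
// The six vision deficiency types accepted by the CDP command
// Emulation.setEmulatedVisionDeficiency (plus "none" to reset).
const VISION_DEFICIENCIES = [
  "blurredVision",
  "reducedContrast",
  "achromatopsia",
  "deuteranopia",
  "protanopia",
  "tritanopia",
] as const;

type EmulatedVisionDeficiency = (typeof VISION_DEFICIENCIES)[number] | "none";

// Builds the raw CDP message for a given deficiency. It is client-agnostic:
// any library that can send CDP commands can use the result.
function visionDeficiencyCommand(type: EmulatedVisionDeficiency) {
  return {
    method: "Emulation.setEmulatedVisionDeficiency",
    params: { type },
  } as const;
}

// In a Playwright suite you could then generate one visual test per
// deficiency, e.g.:
//
// for (const type of VISION_DEFICIENCIES) {
//   test(`PLP renders acceptably with ${type}`, async ({ page }) => {
//     const client = await page.context().newCDPSession(page);
//     const { method, params } = visionDeficiencyCommand(type);
//     await client.send(method, params);
//     await expect(page).toHaveScreenshot(`plp-${type}.png`);
//   });
// }
```

Keeping the deficiency list in one place also makes it easy to trim the loop down to one or two variations if the full matrix proves too slow.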
For example, here is emulated achromatopsia (where you can’t perceive colour) for a snapshot test on John Lewis Women’s Dresses product listing page (PLP):
Here’s the same PLP from the blurred vision test:
And finally one from the protanopia (where you can’t perceive red light) test:
I wouldn’t recommend going overboard with running all of these variations for every test — visual tests are typically heavy and slow by nature, and fragile to browser upgrades, anti-aliasing, and other false positive issues (although this is somewhat mitigated through using third party vendors who typically have intelligent diffing algorithms).
Mileage may vary with third-party integrations: if your provider performs the snapshot on the browser instance you’re running (locally or in CI) then this may well be an option, but if they do HTML + CSS snapshotting to recreate the page in different browsers on their servers then this is likely not a goer.
It could be worth adding a couple to your suites for golden path smoke tests. Alternatively, it can be nice to have such tests running in production on a nightly or weekly basis, where the snapshots form the basis of design review discussion as opposed to a blocking gate in release.
"In the UK, more than 2 million people are living with sight loss. Of these, around 340,000 are registered as blind or partially sighted." — https://www.nhs.uk/conditions/vision-loss/
"Globally, at least 2.2 billion people have a near or distance vision impairment." — https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment
"Colour blindness (colour vision deficiency, or CVD) affects approximately 1 in 12 men (8%) and 1 in 200 women in the world. In Britain this means that there are approximately 3 million colour blind people (about 4.5% of the entire population), most of whom are male. Worldwide, there are approximately 300 million people." — https://www.colourblindawareness.org/
The key is to be aware and empathetic in your design and implementation towards folks who have visual impairments, and understanding of their experience of your site. A few extra, simple tests can help bring colour (pun very much intended) to otherwise dry requirements around font-size and colour contrast which are hard to appreciate in isolation — a quick change to a base font-size in your design system could be a game changer for some folks’ experience on your site.
Automating zoom
Another important aspect of visual accessibility is supporting higher zoom levels, typically of up to 200%. This is laid out in several success criteria in WCAG for text resize and content reflow.
"This Success Criterion helps people with low vision by letting them increase text size in content so that they can read it." — https://www.w3.org/WAI/WCAG21/Understanding/resize-text.html
Unfortunately most major frameworks don’t currently support APIs to instrument browser zoom. Searching around you might see references to Chrome’s --force-device-scale-factor=2.0 flag, but this relates to pixel density and is not zoom (it can be passed to the launchOptions.args array within your framework’s browser setup options if you want to try it and see).
This limitation has been flagged for some testing frameworks so we can hope it might be supported in future. For now you can upvote the likes of this Playwright issue for adding an option for browser zoom.
While we wait for frameworks to catch up, the best solution that I’ve found for the time being is to simulate zoom by adding a before hook in your tests which sets the zoom style of the body. For example, this is a snippet from a Playwright visual test in which a 200% zoom is applied through CSS:
await page.evaluate('document.body.style.zoom=2.0');
In many cases this isn’t necessarily representative of the native browser zoom feature, nor of the OS-level zoom features for enlarging text, which is what the WCAG guidance is really about. Nevertheless, it provides some idea as to how your page behaves when magnified, and can be a useful addition to your golden path test suite to understand whether critical functionality still works.
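As a sketch of how this could be wired into a suite, the evaluated string can be built from the target percentage so the same hook serves multiple zoom levels. The zoomScript helper and the route below are my own invention, not a Playwright API:

```typescript
// Builds the script string passed to page.evaluate(). CSS zoom is
// non-standard, but supported by Chromium, which these tests target.
function zoomScript(zoomPercent: number): string {
  // CSS zoom takes a factor, so 200% becomes 2
  return `document.body.style.zoom=${zoomPercent / 100}`;
}

// In a Playwright test file you might then write:
//
// test.beforeEach(async ({ page }) => {
//   await page.goto("/womens/dresses"); // hypothetical route
//   // WCAG 1.4.4 (Resize Text) asks for support up to 200%
//   await page.evaluate(zoomScript(200));
// });
```

Remember to reset (or isolate) the zoom between tests if your snapshots share a page instance, otherwise later assertions will run against the magnified layout.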
If anything this goes to show how important it is to keep up manual and exploratory testing when it comes to a11y; there is no way to get the coverage otherwise! When it comes to vision deficiencies we are really still quite limited in how much we can shift to automation: some smoke tests on Chrome are great, but what about other browsers? What about tablet and mobile?!
If anyone has any ideas on automation in this space I would love to hear them!
Time to get funky
In this final section I wanted to share one last tool called Funkify.
"Funkify is an extension for Chrome that helps you experience the web and interfaces through the eyes of extreme users with different abilities and disabilities." — https://www.funkify.org/
It is questionable whether this counts as accessibility test automation, as it is a browser extension... but I’ve decided it qualifies as something that automates customer personas to supplement your exploratory testing. It’s also just very cool, so worth the share on that premise alone.
Because it is rather good, the maintainers have, unfortunately but quite understandably, introduced a premium tier around a lot of the functionality. But even if you just have a dabble with the free features, I think it’s well worth it for expanding horizons.
The extension works well on most sites, and as you can see, provides a series of simple personas or modes that you can adopt while browsing the site. Everything from vision defects to simulated dyslexia with scrambling letters to trembling hands.
For example, here is the Waitrose groceries landing page with the Dyslexia Dani persona example where we can get a flavour of what the user experience might be like (be wary this is an example simulation, it is not necessarily representative of the experience for all folks with Dyslexia):
One option that really struck home for me is the Trembling Trevor persona, which simulates what it might be like to use the site if you suffer from a motor degenerative disorder such as Parkinson’s. Unfortunately Medium doesn’t support embedding videos directly to show this off, so I really encourage giving it a go with the free trial at least once! Once you’ve had a play with the extension on desktop, have a think about touch devices — ask yourself how confident you are that your site’s buttons, links, and dropdowns are usable on a small touch screen.
Closing notes
In this article I’ve covered a few additional tools and techniques for expanding a11y coverage, exploring extensions to the likes of visual testing and how these can supplement manual and exploratory testing.
This article is by no means exhaustive, but hopefully it gives a flavour of what tooling is out there. If you’re keen to explore further, these are some awesome sites where you can learn more and find other tools in the a11y automation space:
- https://www.a11yproject.com/resources/ — an absolute wealth of information and resources, from blogs and books to tools and organisations.
- https://a11y-automation.dev/automated-tools — a comprehensive list of automated tools for a11y testing.
- https://www.w3.org/WAI/ER/tools/ — a list of evaluation tools for web accessibility as curated by W3.
Stay tuned for part 3 of this series where we will take a look at the popular "accessibility first" Testing Library framework and start to explore the emerging field of screen reader automation tooling. See you soon! 👋
Hi, my name is Craig Morten. I am a senior product engineer at the John Lewis Partnership. When I’m not hunched over my laptop I can be found drinking excessive amounts of tea or running around in circles at my local athletics track.
The John Lewis Partnership are hiring across a range of roles. If you love web development and are excited by accessibility, performance, SEO, and all the other awesome tech in web, we would love to hear from you! See our open positions here.