Bryce Miller


That Time a Usability Test Made Me Want the Earth to Open Up and Swallow Me Whole

The plan was simple: have a blind user complete a usability test on a WCAG AA-compliant feature and bask in the spontaneous applause emanating from our stunned co-workers. Of course, it didn't quite turn out that way...

Product Under Test

I was working for a company that, to its great credit, is firmly committed to making its products accessible. Accessibility is an integral part of the process, from the first design draft through implementation, testing, and deployment. If a feature doesn't meet or exceed the WCAG AA benchmark, it isn't released.

Business Case

The company arranged for a blind person to come to the office and take part in a usability test. This was obviously a rare and valuable opportunity for developers, testers, and designers to see first-hand exactly how people with reduced vision interact with our website. If the company you work for has never run a usability test with someone who has a disability, reach out to your local interest group and invite them to send someone along. They will be thrilled to be involved! You are making your website accessible specifically for these users, so do include them and listen to what they have to say.

Test Objectives

Each development team was given the opportunity to usability test one feature. The team could choose which of their own features they wanted to test, so of course we chose our most recently released feature.

This feature had a content panel taking up most of the available screen, with an information panel on the right. It was fully tested, met the WCAG AA standard, and even implemented some of the AAA requirements. It had jumped through all the hoops, and was definitely accessible. All we had to do was sit back and enjoy the ovation.

Participants

Our usability tester arrived at the office. As is standard practice at a lot of tech companies, we developers were kept apart from our guest, just in case she found our eccentricity unnerving.

She was taken into a room where she could speak with an approachable designer and a friendly usability test-runner (i.e., people with people skills). Meanwhile, in the big boardroom with the big table, all the fancy chairs, and the left-over sandwiches, a swarm of developers, testers, and scrum masters were plugging in their laptops with Cheetos-covered fingers. Remote teams were dialling in from their own basement locations abroad. I was seated in this room.

The atmosphere was electric. The anticipation was palpable. The air was stale, somebody open the door. The test was about to begin.

Equipment

The test would be carried out on a computer in the first room - a nice, relaxing environment without the stress of two dozen strangers silently judging you. The screen would be shared and projected onto a giant display in the room with all the developers, testers, and scrum masters. An audio link was set up so that we could hear the test being run.

The usability tester had brought her own special piece of equipment - a braille display. It looked like a Kindle/e-reader and would take the HTML and convert it into rows of braille text, which a skilled person could read using their fingers. In the boardroom, the developers and testers all exchanged glances. We had never seen anything like this before. We were not prepared for this.

Test Tasks

The test got off to a lousy start.

  1. Log into the system - FAIL
  2. Navigate to the given course - FAIL
  3. Find the correct assignment - FAIL

The boardroom was completely silent. A wave of embarrassment lapped around the fancy chairs. She hadn't even tested my team's feature yet, and I was already experiencing second-hand shame. The left-over sandwiches wilted with mortification.

I looked at my team-mates and saw them trying to blend into their chairs. Our feature was next. Would it stand up to the tsunami of basic accessibility needs? No. No it would not.

  4. Can the assignment be handed in after the deadline? - FAIL

The answer was in the right-hand panel. The feature was WCAG AA compliant. We had tested it with a screen-reader! We had tested it with two screen-readers! But we still failed. The usability tester couldn't find the information - she couldn't even find the right-hand panel. She'd gotten to the end of the main panel, and just stopped.

I was horrified. I was humiliated. I had humiliated myself. I was confronted, live and in person, by the fact that I had let every single blind user of our system down. And I had been so cocky about it - nothing but hubris.

Debrief

The usability tester was nice about it, and graciously blamed her braille display and the screen-reader, but we all knew that was just her kindness. In reality, the site should work with both the braille display and the screen-reader - it's our responsibility to ensure that it does.

Ultimately, the problem wasn't that the feature wasn't compliant. We had ticked all the boxes; we had jumped through all the hoops. The problem was (and still is) that ticking all the boxes isn't enough. Testing with a screen-reader isn't enough. Making an interface and then tacking on accessibility isn't enough. We should have thought about how someone with accessibility needs would want to navigate the feature, which information is important to them, where that information should be and how it should be presented. Putting important information on a right-hand panel makes sense for sighted users, but it's at the end of the page, and utterly invisible to blind users.
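One way to avoid this trap (a sketch of the general technique, not the actual fix we shipped - all class names and content here are illustrative) is to decouple source order from visual order: put the critical information early in the DOM, inside a labelled landmark, and use CSS to render it on the right for sighted users. Screen readers and braille displays follow the source order, so they reach the key details before the long main content.

```html
<!-- Hypothetical example: the info panel comes FIRST in the source,
     so assistive technology encounters it before the main content.
     CSS Grid then places it visually on the right. -->
<main class="assignment-layout">
  <aside class="info-panel" aria-labelledby="key-info-heading">
    <h2 id="key-info-heading">Assignment details</h2>
    <p>Deadline: 14 June, 23:59. Late hand-ins are not accepted.</p>
  </aside>
  <article class="content-panel">
    <h1>Assignment: Usability essay</h1>
    <!-- long main content here -->
  </article>
</main>

<style>
  /* Visual order differs from source order: sighted users see the
     info panel on the right, but it is first in the reading order. */
  .assignment-layout {
    display: grid;
    grid-template-columns: 1fr 20rem;
  }
  .content-panel { grid-column: 1; }
  .info-panel    { grid-column: 2; }
</style>
```

The `aside` landmark with a heading also lets screen-reader users jump straight to it, rather than having to wade through the main content at all. Whether information belongs first in the reading order is a design decision - which is exactly why this needs to be thought about at the design stage, not bolted on afterwards.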

Think more about the user experience of people with accessibility needs and include them in your development process. That way, you won't need to feel like I did that time a usability test made me want the Earth to open up and swallow me whole.

Top comments (1)

Pritesh Usadadiya

[[..PingBack...]]
This article was curated as part of the 44th issue of Software Testing Notes.