Mikhail Rubanov for Dodo Engineering

3 Cases When Accessibility Could Highlight Bugs and Save Money

The first question people ask about accessibility is whether it can bring in money, since there seem to be so few users who need it. A popular misconception is to single out one group (well, how many blind people are there per million?) and miss the big picture: 45% of iOS users and 59% of Android users enable at least one accessibility setting.

After several years of digging deep into accessibility, I realized something else: when we start adapting apps for people with special needs, we learn much more about how people interact with apps in general, and that gives us powerful new tools.

Today we will talk about mental accessibility problems: how a strong belief can prevent a person from placing an order, what inconsistent behavior costs us, and where the boundaries of a design system lie. All of this is about one thing: how adapting an app for the blind changes the designer's perception, reveals the details of the mental model in a person's head, and highlights errors at the early stages of development.

Example 1: delivery time

Let's start with the clearest case, the one where we know its value in money exactly.

Several years ago, we were redesigning the order placement page: address, time, payment method, etc. One of the design decisions was to remove all the labels from the elements: the screen would become cleaner, and the values themselves and their formatting would be enough for a person to understand what was written there. The result was roughly the following: the screen as a whole had a title, but the order method, the address and the delivery time had no labels, only values.

Screenshot of checkout screen with address, ASAP button and payment type

Let’s stop at this layout and think if everything is clear in it.

The designers had a few questions about how to organize the information. For example, there was a hypothesis that people could hardly tell what the chevrons (>) in the right half of the rows meant. Even though they are everywhere in iOS, they combine oddly with the rest of the data, which is why the rows do not really look like buttons. So we decided to split the behavior: the chevron means that a new page will open, while the “Use” button opens a modal window.

That's what we shipped.

A couple of years later, we got this feedback:

I wanted to order a pizza, but a bit later: I ordered at 12, but wanted it to be delivered at 16.

I didn’t find a button for choosing the delivery time and wrote a comment on my order. The courier didn’t notice the comment and delivered the pizza in an hour, at 13. But that was too early for me, the pizza would be cold by 16.

I asked the courier how I could choose later delivery time. He showed me the delivery screen with “ASAP” button.

But I didn’t need ASAP, I needed later!

Fix it.

What conclusion would you draw from this feedback? At the time, ours was this: the person did not understand that the delivery time could be changed. What was the solution? Since he did not understand how to change it, the problem must be in the chevron, and we had to make it clearer. Here is what we came up with:

Screenshot of delivery time button with label ASAP and change button inside

We shipped it, tested it, and didn't notice much of a difference. So it stayed that way.

Mental model

Now here is how this case looks from the accessibility perspective, and how it could have influenced the business.

But let's begin with a bit of theory. We do not actually peer at the interface when we use it: we scan the page, roughly understand what is shown, and try to complete our task. To fulfill the goal, we use the model that is already formed in our head.

In our example, the person “read” the interface differently than the designer had intended, and afterwards kept relying on his own model of it. A mental model is so strong that even when you realize something is wrong, you can't quickly rebuild it. That is what happened here: the person knew he could choose the delivery time, but he couldn't find the button that was right in front of him.

It would be nice to work not only with how people see the interface, but also with the model of how they perceive it. But how? The answer is simple: change the perception channel and see what changes in how the interface is perceived.

Scheme showing how visual and audio perception lead to the same representation in the mind

Screen Reader

We were working on screen reader support and needed to adapt all the labels and buttons on the screen. Basically, we had to give each element a name, a value and a type, and the smartphone would generate the speech itself to describe the interface to a blind person. The best thing about the screen reader is that you don't have to change the existing graphic interface: with the right text descriptions it is easy to comprehend by ear. And this works because we can build a mental model in a person's head both visually and by ear; a text description is enough to understand how the interface works.

For a simple description, three attributes are enough: a label, a value and a trait (the element type). But there can be more:

Full diagram of element's description for VoiceOver. Container's name, selected, label, value, dimmed, trait, hint

Now let's form the description for our controls. Usually, there is nothing to add and we take the text straight from the interface, but in this case that would be complete nonsense: ASAP, 15 min, button.

for this UI…

Screenshot of delivery time with ASAP label

…we get this description

Screenshot from VoiceOver Designer app that describes selected VoiceOver's settings. Label = ASAP, value = 15 min

At this stage, we can go two ways:

  • to write a special text for the screen reader

  • to redesign the interface

It was too late to redesign. That's why we only wrote correct labels for the screen reader: Delivery time: ASAP, ~15 min, button.

Screenshot from VoiceOver Designer app that describes selected VoiceOver's settings. Label = Delivery time, value = ASAP, 15 min
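
In UIKit, this kind of description boils down to a few properties. Here is a minimal sketch; the function and the view parameter name are hypothetical, but the properties are the standard UIKit accessibility API:

```swift
import UIKit

// Minimal sketch: the corrected VoiceOver description for the delivery time
// control. The parameter name is hypothetical.
func configureDeliveryTimeAccessibility(for deliveryTimeCell: UIView) {
    deliveryTimeCell.isAccessibilityElement = true
    deliveryTimeCell.accessibilityLabel = "Delivery time"   // the name of the control
    deliveryTimeCell.accessibilityValue = "ASAP, ~15 min"   // the current value
    deliveryTimeCell.accessibilityTraits = .button          // the element type
    // VoiceOver now announces: "Delivery time: ASAP, ~15 min, button"
}
```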

With this wording, a person builds the correct mental model, and problems like the one our client wrote about no longer occur.

Let's look again at our attempt to fix the client's problem by replacing the chevron with a button:

Screenshot where we replaced disclosure indicator on the right by change button

Has it changed anything in the mental model? No. The client clearly understood that he could tap the control; the problem was that he misunderstood what the result would be.

It is interesting to look at Apple's standard placement of the elements in this kind of cell: the name sits on the left and is prominent, while the value sits next to the chevron. The proximity of the value to the chevron makes it clear that it is the value that changes. In this version, the interface is consistent with the screen reader description (and the standard control even substitutes the label and value into the announcement correctly).

Screenshot of standard layout for cell: label on the left, value on the right near disclosure indicator
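
For reference, the standard UIKit cell style gives roughly this layout out of the box. A sketch (not our production code), using the classic textLabel/detailTextLabel API:

```swift
import UIKit

// Sketch: the standard .value1 cell style places the label on the left and the
// value on the right, next to the disclosure indicator (the chevron).
func makeStandardDeliveryTimeCell() -> UITableViewCell {
    let cell = UITableViewCell(style: .value1, reuseIdentifier: "deliveryTime")
    cell.textLabel?.text = "Delivery time"        // name: left-aligned, prominent
    cell.detailTextLabel?.text = "ASAP, 15 min"   // value: next to the chevron
    cell.accessoryType = .disclosureIndicator     // the chevron itself
    return cell
}
```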

This could have been the end of our story about mental models and their persistence, but we ran an unusual experiment: the designer suggested a time-selection option in the form of a horizontal carousel, and the business tested it in an A/B test.

Redesigned delivery time element: 4 buttons placed in a row, each directly tappable: ASAP plus 3 buttons with half-hour time slots

Is this design better from a graphical point of view? Much better: there is a name, you can see the current option and the alternatives. Moreover, you can change it with one tap without leaving the screen.

To a blind person it would sound like this: Delivery time, ASAP, 15 min. Adjustable.

Screenshot of the VoiceOver Designer app setting up the element as an adjustable control
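
In UIKit, such a carousel can be exposed to VoiceOver as a single adjustable element. A rough sketch under assumed names (DeliveryTimeCarousel and the time slots are hypothetical): swiping up or down moves between the options without leaving the element.

```swift
import UIKit

// Sketch: a hypothetical carousel exposed to VoiceOver as one adjustable element.
final class DeliveryTimeCarousel: UIView {
    private let options = ["ASAP, 15 min", "12:30–13:00", "13:00–13:30", "13:30–14:00"]
    private var selectedIndex = 0 {
        didSet { accessibilityValue = options[selectedIndex] }
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        isAccessibilityElement = true
        accessibilityLabel = "Delivery time"
        accessibilityValue = options[selectedIndex]
        accessibilityTraits = .adjustable
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // VoiceOver calls these on swipe up / swipe down.
    override func accessibilityIncrement() {
        selectedIndex = min(selectedIndex + 1, options.count - 1)
    }

    override func accessibilityDecrement() {
        selectedIndex = max(selectedIndex - 1, 0)
    }
}
```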

So, as a result, the way the interface sounds to the blind and the way it looks graphically are completely consistent, which minimizes the difference between the visual representation and the mental model.

Test

We tested this design on Android and got a slight increase in conversion. At the scale of 900 pizzerias this adds up to about 100 thousand dollars, or 100 dollars per pizzeria monthly. It's not much, but who would object to receiving 2400 dollars a year just because of the visual design of a single component in an app? And from the franchisees' perspective, this money comes out of thin air.

As a result, adapting the app for the blind turns out to be a good tool for working with the mental model: it highlights the gap between the visual design and how a person perceives the information. With it, we could have noticed the contradiction earlier, thought the component's design through more carefully, and been earning that extra money for two years already.

In the end, ordinary users would have made fewer mistakes, and the adaptation for the blind would have become even easier, because all the text is already in the interface; all that remains is to pass it to the screen reader correctly.


Example 2: bonus program and tap areas

The second case is about trying to make things convenient for a narrow group of users.

We have a cell with a product in the menu. It is simple at first glance: the name, the description and the price.

Screenshot of a cell with pizza image, name, description and pizza's price

From the interaction side, everything looks simple, too: a tap anywhere on the cell opens a card with the product description.

Then we got an idea to make it convenient for advanced users: a tap on the price button started adding the product to the cart right away, without opening the product card. Great, right? It was, because it became easier to add drinks, snacks and everything else that has no extra options.

The problems become visible once you highlight the interaction areas:

  • the tap area of the price button is rather small
  • right below the button there is an area that opens the card without adding the product to the cart

Screenshot of a cell with pizza that shows tappable area of price button: it's very small

What makes things worse is that we do not explain the difference in behavior in any way: I'm sure there are cases when people mean to tap the cell, accidentally hit the button, and “for some reason” the product is immediately added to the cart. There is no mental model for such a change in behavior.

Is it possible to make this behavior unambiguous? Yes. For example, in our other apps the button sits in the corner, with higher contrast and an eye-catching plus sign.

Screenshot of application with coffee catalog and plus button in cell's corner

It got even funnier in the bonus program. We made a screen where you choose goods to buy with the bonuses you have collected. The cell design is the same, so we decided to keep the behavior: a tap on the price adds the product to the cart, and a tap on the cell… does nothing, because a tap on the cell usually opens the product card, but there is no card here. Is it logical? In general, yes, but the users still don't know about it.

As a result, we have these tiny tap areas on the screen. The problem is obvious only because I have highlighted the buttons; from a regular screenshot you would never guess it.

Screenshot of screen with a list of bonus products and small tappable areas on price buttons

In the end, a number of people run into problems:

  • sometimes a card opens and sometimes the product is just added;
  • the bonus program seems not to respond to taps at all, because it is not obvious that you need to hit exactly the button;
  • some people can hardly aim at the tiny price button.

The screen reader is also broken: it focuses on the entire card and reads its description correctly, but it can't activate it, because by default the activation tap goes to the center of the element, and the button is “a bit” lower.
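
One way this could be patched for VoiceOver without changing the layout is to move the element's activation point onto the price button, so that a double tap lands on it. A sketch with hypothetical cell and priceButton views:

```swift
import UIKit

// Sketch: redirect VoiceOver's activation (double tap) from the center of the
// cell to the price button. Both parameter names are hypothetical.
func pointActivation(of cell: UIView, at priceButton: UIView) {
    // Activation points are expressed in screen coordinates.
    let buttonFrameOnScreen = UIAccessibility.convertToScreenCoordinates(
        priceButton.bounds, in: priceButton)
    cell.accessibilityActivationPoint = CGPoint(
        x: buttonFrameOnScreen.midX,
        y: buttonFrameOnScreen.midY)
}
```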

How should it be laid out? The tap area should span the full width of the screen and be as tall as possible:

Same screenshot of screen with a list of bonus products and tappable areas in whole area of each cell

The behavior also needs to be aligned between screens: a tap on the card should always perform the main action, without the clever shortcut of quick adding to the cart: people don't order pizza every day, they don't need that kind of speed.

Then the cells will be large enough for the blind, people with tremor will find them easy to hit, and the behavior will be the same throughout the entire app, which helps people with cognitive disabilities. It will be clearer for everyone else, too.

Screen reader

Again, the adaptation for the screen reader reveals new details. By ear, we would like to hear the name and the price first, and only then the ingredients.

Screenshot of VoiceOver Designer with cell's description Tonno, from 11.9 euros and description in the end
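
In code, one way to get that reading order is to expose the whole cell as a single element and compose the announcement manually, with the description coming last. A sketch; the function name, parameter and ingredient text are placeholders, and using the hint for the description is just one possible choice:

```swift
import UIKit

// Sketch: the whole product cell as one VoiceOver element, read in the
// desired order: name first, then the price, then the ingredients.
func configureProductCellAccessibility(_ productCell: UIView) {
    productCell.isAccessibilityElement = true
    productCell.accessibilityLabel = "Tonno"                          // name is read first
    productCell.accessibilityValue = "from 11.9 euros"                // then the price
    productCell.accessibilityHint = "Tuna, mozzarella, tomato sauce"  // description last
    productCell.accessibilityTraits = .button
}
```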

This problem is also visible graphically, because all the design tricks are at work here: a bold title, a faded description and a colored button try to change the order in which we look at the interface, and thus the order of reading. Besides, people with poor eyesight cannot read the faded text at all.

Cell screenshot illustrating the visual reordering of reading: title, price and then description

In general, it's fine for a graphic interface to play with order like this, but trying to get rid of the problem leads to interesting questions. For example, is the description necessary at all if we have done everything to make it invisible? And if we drop it, can we make the pictures larger? Will that affect conversion? Will we earn more money this way?


Example 3: the boundaries of the design system

Once we were discussing our plans for the design system and how we see it. The result was this chain from small elements to large ones:

  • colors, fonts, controls

  • cells, cards, screens

  • mood and values.

With the first point everything is clear, and everyone knows how to do it. We decided not to dive deep into the values, so the question arose about the cells from the previous examples: are product cells part of our design system?

On the one hand, they are, because the same elements appear in the menu, in product search and in the bonus program. On the other hand, we want to be able to customize something on the spot, just in case. The discussion hit a wall: each side had its own arguments. But if you dig down to the values and look from the accessibility side, everything becomes quite simple. The menu cell supports Dynamic Type, VoiceOver, Voice Control, Switch Control, and a whole bunch of business tasks: personal pricing, product availability, highlighting the right products, right-to-left support for Arabic, and so on. That's 24 states at a minimum, a decent amount of work:

Screenshot of files in finder that represents different variants of product's cell

So we decided to make large cells with an emphasis on the products. That's over 30 more states to make them work just as well.

Another screenshot of files in Finder with a different layout

From this angle another question arose: for a sudden desire to draw a button a little differently, do we really want to give up all these capabilities? Or, even worse, duplicate them in another component and then maintain both?

In the end, my suggestion is as follows: focus on the main scenarios, invest in them, and let the additional features stay additional: build them on the basic components so that they need no special support. The saved time is better spent on the main scenarios, for example on accessibility support.

Maintaining accessibility at the design system level is very convenient: you do it once, and correct behavior is immediately ensured throughout the entire app. But to achieve this effect, you need to expand the notion of what we treat as a single component. Different cards, modals, pop-up toasts, and even entire flows may have additional requirements, and the boundaries of their applicability will be dictated by accessibility.
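
As a rough illustration of "do it once at the component level": a design-system cell can configure its own accessibility from its model, so every screen that reuses it gets correct VoiceOver output for free. A sketch; Product and ProductCell are hypothetical names, not our actual component:

```swift
import UIKit

// Sketch: a design-system cell that sets up its own accessibility once.
struct Product {
    let name: String
    let details: String
    let price: String
}

final class ProductCell: UICollectionViewCell {
    func configure(with product: Product) {
        // ...the visual layout of the image, labels and price button goes here...

        isAccessibilityElement = true
        accessibilityLabel = product.name      // name first
        accessibilityValue = product.price     // then the price
        accessibilityHint = product.details    // description last
        accessibilityTraits = .button
    }
}
```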

VoiceOver Designer

All three cases share a common idea: accessibility needs to be dealt with as early as possible. It gives the designer an additional tool, a different point of view, a better understanding of the result, and a way to step back from the purely visual.

But there were no such tools. To help designers immerse themselves in the topic, I've written the VoiceOver Designer app: it lets you mark up the screen reader description on top of a screenshot, run the mockup on an iPhone and listen to the result.

Screenshot of the application VoiceOver designer. Screenshot of screen with pizza inside the app

Almost all the properties that can be set on an element are available in the interface, which sets the constraints and suggests what result you can get.

To make it easier to dive into the topic, I have collected a dozen examples of adaptation: you can see which settings are selected in them, how the screens are marked up and how the blind interact with them.

Screenshot of the application VoiceOver designer with samples of the screen

You can learn more about the app and the adaptation examples on the website: https://rubanov.dev/voice-over-designer

Oh yes, I also wrote a book, About iOS Accessibility. The book is free. It's in Russian, but all the code examples and illustrations can be understood without translation. To make the book accessible to everyone, we are working on a translation. Follow us so you don't miss the release of the book chapters in English.

To discover more about Dodo IS and top QSR innovations in Dodo Brands, follow us on LinkedIn and Medium!

Top comments (4)

GreggHume

Very elaborate write up! In South Africa we have people who speak many different languages where English is not their first language. And many different cultures.

Doing UI here is interesting but also more simple. We have to choose our words and buttons carefully. In most cases I will opt for a word over an icon in a button. We also do deliveries, and a word like "Recipient Name" is too complicated for someone whose English is not good... is the recipient me or them.. or what is a recipient? most people think.. so words like that we have to change to "Person the delivery is for" or something along those lines.

I try to make UI intuitive and easy to understand, and I suppose your screen reader approach helps with that. I always say "make it easy enough for old people to understand" and it's helped me make some good UI choices.

Thanks for your write up!

Mikhail Rubanov

Thank you! Language accessibility is a great topic.

Currently I am in a Spanish-speaking country and have a lot of examples and solutions for language barriers:

  • every translation app should support Dynamic Type, so you can show the translation to another person at a distance
  • numbers are hard to perceive by ear. A screen with the order total facing the customer simplifies a lot (and is useful for deaf people too)
  • it's cool when cashiers at coffee shops know simple English words like “medium” or “take away”. They were also the first words I started to learn
Pritesh Usadadiya

[[..Pingback..]]
This article was curated as a part of #89th Issue of Software Testing Notes Newsletter.
Web: softwaretestingnotes.com

Mikhail Rubanov

Thank you!