Accessibility testing: Which team actually ships accessible products?

3 minute read

95% vs 40% coverage. If that was all you knew about the test coverage of our two teams from last time, could you tell which team actually ships accessible products?

I doubt it.

Test coverage is basically the measure of how much of your code is being run by your test suite. It's like checking "did we write tests that exercise this code or not?"

When you run coverage tools, they track which lines, branches and functions in your code got executed during your tests. So if you have a function with 100 lines and your tests only run 60 of those lines, you've got 60% line coverage. Simple as that. The tool watches which parts of your code are touched and which parts are ignored.
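To make that concrete, here's a minimal sketch, assuming a Jest-style test runner (the function and the numbers are made up for illustration): a function with two branches, and a test that only ever exercises one of them.

```ts
// discount.ts -- a function with two branches
export function applyDiscount(price: number, isMember: boolean): number {
  if (isMember) {
    return price - 10; // members get 10 off
  }
  return price; // everyone else pays full price
}

// discount.test.ts -- only the member branch is ever executed
import { applyDiscount } from "./discount";

test("members get 10 off", () => {
  expect(applyDiscount(100, true)).toBe(90);
});

// Running the suite with coverage (e.g. `jest --coverage`) would flag the
// `return price;` line as never executed: every function was called,
// but only half of the branches were covered.
```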

Now, what do the numbers actually mean?

A lot of teams aim for somewhere in the 70-80% range. That's generally considered pretty solid: high enough that you're catching most things, while accepting that chasing 100% is often a waste of time, because writing tests for edge cases that almost never happen rarely pays off.
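If your team does want to hold a line in that range, most coverage tools let you enforce it in config. Here's a rough sketch of what that could look like with Jest's coverage thresholds; the exact numbers are just the range above, not a recommendation.

```ts
// jest.config.ts -- fail the test run if coverage drops below the target range
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      lines: 80,      // fail if fewer than 80% of lines were executed
      branches: 70,   // branches are harder to hit, so a slightly lower bar
      functions: 80,
      statements: 80,
    },
  },
};

export default config;
```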

The thing to understand is that coverage is a useful metric, but it's not gospel. You can have 95% coverage and still ship bugs. And that's before we even get into the very real fact that plenty of tests are garbage.
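Here's a sketch of what I mean by garbage, reusing the hypothetical applyDiscount from above: a test that executes every line, so coverage looks great, but asserts nothing, so it passes no matter how broken the logic is.

```ts
// discount.test.ts -- executes both branches, checks nothing
import { applyDiscount } from "./discount";

test("applyDiscount runs without throwing", () => {
  applyDiscount(100, true);  // member branch executed
  applyDiscount(100, false); // non-member branch executed
  // No expect() anywhere: 100% coverage, zero confidence.
});
```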

Coverage is also different from tests passing or failing. You can have 100% coverage while 50% of your tests fail.

The real value of coverage is using it as a sanity check. If you've got 0% coverage on certain parts of your code you know matter, that's a big red flag.

If you accept all this, then the big question that follows is: how do you tell which parts of your code matter, and which you can ignore? Answer that and you can answer the question I asked at the start. Which team ships accessible products?

And the answer is...

..............................................................................................................................

....................................................................

some drum roll would be nice

....................................

..............

The one that put in the work to understand their users and how they use the product, and then made intentional decisions about where to spend the effort. The one that accepted that they can't test everything and made peace with that.

Why?

Because they stopped measuring coverage and started measuring confidence. Instead of essentially asking "how many elements passed the scan?", they ask "how certain are we that a keyboard user can use this?"

95%. 40%. Those are just numbers.

Who uses this? What do they need to do? Where would failure hurt the most?

Answer these questions and make sure you cover these use cases in your tests. This is the decision that matters.
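As one sketch of what that can look like in practice, assuming Testing Library with user-event and jest-dom in a DOM test environment (the checkout form here is entirely hypothetical), this is a test written around a user's task rather than a scan result:

```ts
// checkout.a11y.test.ts -- "can a keyboard-only user finish checkout?"
import "@testing-library/jest-dom";
import { screen } from "@testing-library/dom";
import userEvent from "@testing-library/user-event";

test("a keyboard-only user can reach the payment fields and the submit button", async () => {
  // Hypothetical markup standing in for the real checkout page.
  document.body.innerHTML = `
    <form>
      <label for="card">Card number</label>
      <input id="card" />
      <button type="submit">Pay now</button>
    </form>
  `;

  const user = userEvent.setup();

  // Tab through the page exactly like a keyboard user would.
  await user.tab();
  expect(screen.getByLabelText("Card number")).toHaveFocus();

  await user.tab();
  expect(screen.getByRole("button", { name: "Pay now" })).toHaveFocus();
});
```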
