Accessibility testing: Beyond coverage

3 minute read

I used to chase accessibility testing coverage the same way others chase code coverage.

I'd run the automated checker and log everything, including best practices. I made no distinction between critical and serious issues. I mean, you want everything to work for everyone all of the time, right? There's no point in splitting hairs, then. Either everything works, or none of it does. So I always aimed for 100% coverage.

That's wrong. Eh, I grew up a bit.

100% coverage feels like safety, but it's often an illusion.

It doesn't really matter if I have 100%, 95% or 20% coverage. If a screen reader user can't fill in the checkout form or I trap someone using a keyboard in a focus loop, those coverage numbers don't matter. The metrics can look good without the product working.
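That focus-loop failure can be modelled as a reachability problem: pressing Tab walks a chain of focusable elements, and a trap is a cycle that never reaches the rest of the page. Here's a minimal, hypothetical sketch — the element names and tab order are made up, and a real test would drive an actual browser rather than a plain object:

```javascript
// Hypothetical tab order: each element names the element that
// receives focus on the next Tab press.
const tabOrder = {
  'search-input': 'modal-close',
  'modal-close': 'modal-ok',
  'modal-ok': 'modal-close', // bug: Tab cycles inside the modal forever
  'submit-button': 'footer-link',
};

// Follow Tab presses from `start` and report whether `target` is
// ever reached before the focus order cycles or dead-ends.
function reaches(start, target, order) {
  const seen = new Set();
  let current = start;
  while (current && !seen.has(current)) {
    if (current === target) return true;
    seen.add(current);
    current = order[current];
  }
  return false; // cycled or dead-ended without reaching the target
}

console.log(reaches('search-input', 'submit-button', tabOrder)); // false: keyboard user is trapped
```

No automated coverage number flags this: every element in the loop is individually focusable, so each one "passes" — the journey is what's broken.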

You can hit high coverage by testing trivial paths whilst missing critical user journeys.

The trap is obvious once you see it.

Coverage metrics count how many elements you tested, not whether you tested the things that actually break for disabled users. Coverage tells you what you tested, not whether any of it matters.

An automated tool can scan every heading, every image alt text, every colour contrast ratio. It can flag hundreds of issues. Logging all of them without any sense of priority is just nonsense.
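One way to put a sense of priority back into that pile is to triage by severity before logging anything. A minimal sketch, assuming axe-core-style results where each violation carries an `impact` field (the sample violations below are hypothetical; real ones come out of a tool run):

```javascript
// Hypothetical scan results in the shape axe-core reports them:
// each violation has an id and an impact level.
const violations = [
  { id: 'color-contrast', impact: 'serious' },
  { id: 'image-alt', impact: 'critical' },
  { id: 'region', impact: 'moderate' },
  { id: 'heading-order', impact: 'minor' },
];

const MUST_FIX = new Set(['critical', 'serious']);

// Split results into issues that block users and everything else.
function triage(results) {
  return {
    mustFix: results.filter((v) => MUST_FIX.has(v.impact)),
    niceToHave: results.filter((v) => !MUST_FIX.has(v.impact)),
  };
}

const { mustFix, niceToHave } = triage(violations);
console.log(mustFix.map((v) => v.id)); // fix these now
console.log(niceToHave.map((v) => v.id)); // log these, don't block on them
```

The point isn't the code — it's the bucket. Critical and serious issues go in the fix-now pile; the rest goes in the nice-to-have bucket instead of drowning it out.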

You still won't know if your checkout flow is usable for someone using a keyboard. It won't catch that your dropdown is technically keyboard-accessible but meaningless to navigate. You can pass every automated check and still exclude people from using your product.

Maybe it's time we start asking different questions than "what's our coverage?"

"Did we test everything?" becomes "Did we test what actually matters?" That means understanding where users with disabilities hit problems. What workflows are most critical? Where do assistive technologies break down? Where does the product assume a particular way of interacting?

Test these things ruthlessly. Ignore the rest. Or at least put them in the nice-to-have bucket.

Maybe it's time we figure out what's actually worth testing.

What are the workflows that matter most? Can someone complete core tasks using only a keyboard? Does a screen reader user get meaningful information or just noise? Can someone with low vision read the content?

A product with 60% accessibility coverage but deliberate testing of critical paths will serve users with disabilities better than one with 90% coverage that ignored the things people actually do.

Would you rather pass automated checks or have a product people can actually use?


Did you enjoy this bite-sized message?

I send out short emails like this every day to help you gain a fresh perspective on accessibility and understand it without the jargon, so you can build more robust products that everyone can use, including people with disabilities.

You can unsubscribe in one click and I will never share your email address.