Accessibility testing: Numbers lie

1 minute read

Imagine two teams.

Team A runs their automated accessibility checker. 95% coverage. All tests pass. They ship on Friday afternoon feeling solid. Come Monday, they're greeted with a pile of tickets. Screen reader users can't use the payment form any more. Keyboard users get trapped in the checkout flow. Oops.

Team B sits at 40% coverage. Less than half of Team A's number. Yet they sleep fine through the weekend. They know the Friday release won't bite them in the ass on Monday. They tested keyboard navigation on their core workflows. They manually checked screen reader compatibility on checkout. They know exactly where their gaps are, and they're okay with that, because those gaps don't matter much right now.

95% vs 40%. Those numbers don't measure confidence. They measure how many elements your automated tests scanned, not whether your product works for users with disabilities.

One team optimised for a metric. The other optimised for usability.

Guess which one actually ships accessible products?


Did you enjoy this bite-sized message?

I send out short emails like this every day to help you gain a fresh perspective on accessibility and understand it without the jargon, so you can build more robust products that everyone can use, including people with disabilities.

You can unsubscribe in one click and I will never share your email address.