Accessibility testing: The culture gap

3 minute read

When was the last time you deployed to production on a Friday afternoon?

Right?

Never. We avoid it like the plague even if we have a "comprehensive" test suite and amazing coverage.

Why though?

Shouldn't passing tests mean all is well?

Shouldn't failing tests automatically prevent a production deployment of the bad code?

I think it's because we don't trust our tests. We don't trust that even if they all pass, we won't be pulled back to work over the weekend because users found serious blockers in production.

And we think the reason we don't trust our tests is that we don't have enough of them.

I think that's wrong.

I think there's a serious gap between having comprehensive test coverage and what those tests actually check. If the goal is to not ship broken experiences to users, we have to start at the core of what testing means for your team.

The gap isn't about what your tests catch, but about whether your team cares about what they're testing. Tests are supposed to be a safety net. But not if nobody trusts them.

Think about culture for a second.

When a test fails, what happens? Do you stop and investigate? Or do you disable the test and move on? When a test passes, do you believe everything works, or do you run it through another system just to be sure? Maybe first deploy to a test environment. Then to staging-1. Then staging-2. In a few days, if all went well, it reaches production.

The truth is culture determines whether tests inform decisions or just create the illusion of safety.

I've seen 90% coverage where tests are treated as obstacles. Something someone needs to do. But it's okay if no one does. In fact, it's better. And I've seen products with very little testing, but where every test is sacred. It cannot fail. People scramble when it does. And when it passes, there's no doubt nothing broke as a result. The difference isn't the percentage. It's the mentality.

Tests catch technical problems.

A function returns the wrong value. A database query fails.

Culture catches whether you're testing the things that matter. A test can pass perfectly whilst the feature breaks real user workflows. That's not a testing problem. That's a culture problem.
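
To make that concrete, here's a minimal sketch, assuming a Jest-style test runner with a jsdom environment. The component and test are hypothetical, but the pattern is real: every assertion is green while the feature is unusable for part of your audience.

```ts
// Hypothetical "submit button" built as a clickable <div>,
// a common accessibility mistake.
function renderSubmitButton(onSubmit: () => void): HTMLElement {
  const el = document.createElement("div");
  el.className = "btn";
  el.textContent = "Submit";
  el.addEventListener("click", onSubmit);
  document.body.appendChild(el);
  return el;
}

test("submit button renders and handles clicks", () => {
  let submitted = false;
  const btn = renderSubmitButton(() => { submitted = true; });

  expect(btn.textContent).toBe("Submit"); // green
  btn.click();                            // green: the mouse path works
  expect(submitted).toBe(true);
});

// What no assertion covers: a <div> has no button role and cannot
// receive keyboard focus, so keyboard and screen reader users can
// never submit. Every test passes. The workflow is broken.
```

A coverage report would happily call that function fully covered.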

A lot of the time, testing gets delegated to specialists. You have a testing team whose only job is to look at some user stories and write tests to match. It becomes a box to tick. Quality stops being everyone's responsibility and becomes someone else's job. Before long, nobody actually owns whether the tests matter. They only care that they're there.

The uncomfortable truth is that most teams don't have a testing problem. They have a caring problem.

The real question you should be asking is "Are we testing what we actually care about?" Not "Do we have enough tests?"
