Real talk: Green tests and red flags

1 minute read

Teams treat test suites like some kind of oracle.

When they're green, everyone's ecstatic. "The tests are green, praise be! Ship it!"

Never mind that the tests barely scratch the surface of what users actually do.

Never mind that accessibility is forgotten yet again.

Never mind that form validation only checks if it submits, not whether users can figure out what the hell went wrong when they inevitably cock it up.

Never mind that you only test in Chrome.

Meanwhile, real people can't use the site properly. You've built an entire culture around worshipping false positives while actual users go "what the hell?!"

Despite what you might think, green doesn't mean good. It just means that what you tested works under those specific conditions.

If you panic when tests suddenly turn red after fixing actual accessibility issues, you're definitely doing it wrong. And if you think you should revert those fixes just to get back to green, don't talk to me. Honestly, nothing says quality engineering like deliberately making your product worse to satisfy dodgy metrics.

"Ship on green" is the tail wagging the dog. Instead of tests serving users, users are getting screwed over to serve your tests.

Green tests might just mean red faces when actual people try to use your product.


Did you enjoy this bite-sized message?

I send out short emails like this every day to help you gain a fresh perspective on accessibility and understand it without the jargon, so you can build more robust products that everyone can use, including people with disabilities.

You can unsubscribe in one click and I will never share your email address.