I have smoke alarms in my home. I went for the smart ones that notify you via an app when you're not home. So, am I care-free now? Does that mean I leave logs burning in the fireplace and go shopping? No way! I know smoke alarms don't prevent fires.
Just like QA doesn't prevent accessibility issues. It just catches them. When it's too late. By the time problems reach QA, they're expensive to fix and they often get deprioritised. Automated checks during development catch issues when code is fresh and fixes take minutes instead of days. These checks also normalise accessibility because developers see issues as part of their workflow, not as a result of a separate audit.
I know that when people hear "automated accessibility checks" they start frowning. I've heard the complaints. They catch only a handful of issues. They can't detect complex flows. They're no substitute for manual testing. And I agree with all of these complaints. But none of them is a reason to skip the checks. I'd rather catch 30% of issues when fixes take a few minutes than wait for them to go through the entire dev lifecycle and come back.
I recommend three types of automated checks, ordered from easiest to hardest to set up.
1. During development
Use tools that run as plugins in your code editor and as pre-commit hooks that run before you commit code. They flag issues before code leaves the developer’s machine and are so easy to set up it's a shame they're not second nature.
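To make this concrete, here is a deliberately tiny sketch of the kind of check a pre-commit hook could run: plain Node.js, flagging `<img>` tags that have no `alt` attribute. The function name is made up for illustration; real linters (eslint-plugin-jsx-a11y, axe's editor extensions) parse markup properly and cover far more rules.

```javascript
// Toy pre-commit check: flag <img> tags without an alt attribute.
// Illustration only -- a regex is not a real HTML parser.
function findImagesMissingAlt(html) {
  const imgTags = html.match(/<img\b[^>]*>/gi) || [];
  return imgTags.filter((tag) => !/\balt\s*=/i.test(tag));
}

const sample = '<img src="logo.png"><img src="cat.jpg" alt="A cat">';
console.log(findImagesMissingAlt(sample)); // one offending tag
```

A hook like this exits non-zero when the returned list is non-empty, which is all a pre-commit framework needs to stop the commit.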
2. Pull requests
Once they're done with their code, developers push it to some online repository and then create a pull request. A pull request is when a developer proposes changes to the code and asks others to review and merge them with the existing code. It's just a structured way of saying, "Here's my work, can you check it and add it to the product?"
During this process, we can insert automated checks that block merges if they find any issues. And this happens before anyone even needs to look at and review the code.
This is arguably a bit more complex to set up than local development tools. Most teams use platforms like GitHub, GitLab or Bitbucket to manage their code. All of them let you add automated checks that run every time someone opens a pull request. Your regular tests can be wired up to run on every PR.
The beauty of this stage is the timing. The developer still has full context of what they just built. They haven't moved on to the next thing. Failing checks here are quick fixes.
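As a sketch of what wiring this up on GitHub might look like, the workflow below runs on every pull request. The job name, Node version and the `test:a11y` script are assumptions about your project, not a standard.

```yaml
# Illustrative GitHub Actions workflow: run accessibility tests on every PR.
name: accessibility-checks
on: [pull_request]
jobs:
  a11y:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run test:a11y   # a failure here marks the check red, blocking the merge
```

Combined with branch protection rules that require this check to pass, nothing gets merged until it's green.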
3. CI/CD integration
Continuous Integration (CI) means every change gets tested and merged regularly, while Continuous Delivery/Deployment (CD) means those changes are automatically released to users. CI/CD is a way of automatically building, testing and deploying code so changes can go live quickly and safely.
For accessibility, CI/CD means you can run a full automated scan every time code is about to go live. Tools like axe-core, Playwright with accessibility assertions or Lighthouse CI can spin up your app in a headless browser and scan pages for issues before a single user sees them.
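For example, Lighthouse CI can be configured to fail the pipeline when the accessibility score drops below a threshold. A minimal `lighthouserc.json` sketch, with the URL as a placeholder and the 0.95 threshold purely illustrative:

```json
{
  "ci": {
    "collect": {
      "url": ["http://localhost:3000/"]
    },
    "assert": {
      "assertions": {
        "categories:accessibility": ["error", { "minScore": 0.95 }]
      }
    }
  }
}
```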
This is the most powerful layer. It doesn't care how careful the developer was or whether anyone remembered to run a local check. It just runs every time and if it finds something, the deployment stops.
The tradeoff is that it's also the most expensive place to find a problem. The code has been written, reviewed and approved. Fixing something here means going back through that whole chain. Which is exactly why you want the earlier layers doing their job first.
Why have all three if they do roughly the same thing? Simple. Anyone can bypass pre-commit hooks. Without the right branch protection in the online repository, anyone can force-push without going through code review. That leaves the last line of defence: if code can only reach production through green CI/CD pipelines, no flagged issues get to your users.
Why not keep only the last one, then, if it's the "safest"? Simple. You want to know as soon as possible when something is wrong, not at the very last minute. Things are easier and cheaper to fix closer to the source.
All three work as guardrails against deploying code with potential accessibility issues.
They create a "fail-fast" culture where accessibility issues are treated like any other broken test. The result is that around 60-70% of common issues, like missing form labels, low colour contrast or bad heading structure, never reach production.
Do they catch all errors? No. Do they rely on people doing what they're supposed to do? Yes.
They are not without their pitfalls. Automated tools produce false positives, so start in "warn-only" mode and tune the rules over time. And automation finds syntax issues, not experience issues, so pair it with manual keyboard and screen reader testing.
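One way to implement that "warn-only, then tighten" approach: axe-core reports an `impact` level on each violation, so a CI step can decide pass/fail from a severity threshold instead of failing on everything. The helper below is an illustrative sketch, not part of any tool; the threshold and names are assumptions.

```javascript
// Decide whether a set of axe-core violations should fail the build.
// axe-core tags each violation with an impact level:
// "minor" | "moderate" | "serious" | "critical".
function shouldFailBuild(violations, minImpact = 'serious') {
  const rank = { minor: 0, moderate: 1, serious: 2, critical: 3 };
  return violations.some((v) => rank[v.impact] >= rank[minImpact]);
}

const violations = [
  { id: 'color-contrast', impact: 'serious' },
  { id: 'region', impact: 'moderate' },
];
console.log(shouldFailBuild(violations)); // true: the serious violation blocks the build
```

Starting with `minImpact = 'critical'` and lowering it over a few sprints gives the team time to clear the backlog without the pipeline turning permanently red.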
The best thing to do is to make it the whole team's job and rotate who triages scan results weekly. This way you also make the invisible visible.
Don't build the perfect system. You won't be able to do it from the start. Instead, pick one tool and one place and run it daily. Fix what it finds and after 2 weeks, add the second tool. Momentum matters more than perfection.