One accessibility metric that can give you early warnings is automated test results. This one is a bit controversial, I know. Most people will tell you that automated tools catch less than half of all accessibility issues. Fair enough, but I'd rather have 50% fewer bugs to test for manually, so I'm going to say automated accessibility tests are really good at catching the low-hanging fruit.
Even so, when most people think of automated tests, they just think of one-click scans with Axe or WAVE. Not me.
So what do I mean by automated tests?
For me, automated tests go way beyond what the free tools give you. A test suite that runs continuously on every deploy and every pull request needs to:
- catch the low-hanging fruit like missing alt text, poor colour contrast, etc
- test keyboard navigation
- test focus styles
- test content readability
- test forms as if a user would fill them in (see the sketch after this list)
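As a concrete example of that last point, here's a minimal Playwright sketch of a form test. The URL, field label and success message are all hypothetical; swap in your own.

```ts
import { test, expect } from '@playwright/test';

test('newsletter form can be completed like a user would', async ({ page }) => {
  await page.goto('https://example.com/newsletter'); // hypothetical page

  // Locate the field by its label, the same way assistive tech resolves it
  await page.getByLabel('Email address').fill('user@example.com');

  // Submit from the keyboard rather than clicking
  await page.getByRole('button', { name: 'Subscribe' }).press('Enter');

  // Success feedback should live in a status region so screen readers announce it
  await expect(page.getByRole('status')).toContainText('Thanks for subscribing');
});
```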
You need to run these tests regularly to detect WCAG violations and catch accessibility issues early in the development process. When the number of violations and their severity is always front and centre, you can prioritise fixes and improve the overall accessibility of your product.
So how do you do it?
- Use automated testing tools like axe to scan your pages and flag issues like missing alt text, poor colour contrast, improper heading structure and more. Axe categorises issues by severity: critical, serious, moderate and minor.
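A scan like that can live right inside your test suite. Here's a minimal sketch using @axe-core/playwright; the URL is a placeholder and the severity tally is just one way to report it:

```ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page has no detectable WCAG violations', async ({ page }) => {
  await page.goto('https://example.com/'); // swap in your own URL

  const results = await new AxeBuilder({ page }).analyze();

  // Tally violations by axe's severity levels for the test report
  const bySeverity: Record<string, number> = {};
  for (const violation of results.violations) {
    const impact = violation.impact ?? 'minor';
    bySeverity[impact] = (bySeverity[impact] ?? 0) + 1;
  }
  console.table(bySeverity);

  expect(results.violations).toEqual([]);
});
```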
- Write end-to-end tests with Playwright or Puppeteer driving a headless browser. Both have well-documented APIs to help you navigate a page with the keyboard, much like a user would. axe-core will fill in any gaps these tools might have.
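For instance, a keyboard test might tab into the page and assert that focus lands where you expect and is actually visible. This sketch assumes, hypothetically, that a "Skip to content" link is the first tab stop:

```ts
import { test, expect } from '@playwright/test';

test('first tab stop is the skip link with a visible focus style', async ({ page }) => {
  await page.goto('https://example.com/'); // swap in your own URL

  // Tab once from the top of the page, like a keyboard user would
  await page.keyboard.press('Tab');

  const skipLink = page.getByRole('link', { name: 'Skip to content' });
  await expect(skipLink).toBeFocused();

  // The focus style must be visible, not suppressed with outline: none
  const outlineStyle = await skipLink.evaluate(
    (el) => getComputedStyle(el).outlineStyle
  );
  expect(outlineStyle).not.toBe('none');
});
```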
- Run all of these tests, including the automated scans, as part of your CI process. Fail a PR on critical or serious violations; warn on moderate and minor ones.
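One way to implement that gate is in the test itself: assert only on critical and serious violations and log the rest as warnings. A sketch, reusing the axe results from above (the checkout URL is hypothetical):

```ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('no critical or serious violations on the checkout page', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // hypothetical URL

  const { violations } = await new AxeBuilder({ page }).analyze();

  const blocking = violations.filter(
    (v) => v.impact === 'critical' || v.impact === 'serious'
  );
  const warnings = violations.filter(
    (v) => v.impact === 'moderate' || v.impact === 'minor'
  );

  // Surface the lesser issues in the CI log without failing the build
  for (const w of warnings) {
    console.warn(`[a11y warning] ${w.id}: ${w.help} (${w.helpUrl})`);
  }

  // Failing this assertion fails the test, which fails the PR check
  expect(blocking).toEqual([]);
});
```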
Your primary goal should be to eliminate all critical and serious WCAG violations, since these can severely impact users with disabilities. Each product release should show a reduction in the total number of issues detected. You may still have minor issues. That's okay. A downward trend in critical and serious violations signals improvement.
Even if your core pages are all accessible, be careful with anything new you add. You can inadvertently introduce new violations, so be sure to write tests for each new feature.
There's one pitfall.
After all, most people can't be completely wrong. Automated tests catch many violations, but they'll miss certain issues, like those related to user experience, logical tab order or complex interactions that need manual testing. So don't rely solely on automation. Instead, combine it with manual audits.