A while back I wrote a piece on what sort of things you should measure in web accessibility. Most of the time, product teams look to the audit and the Accessibility Conformance Report (ACR) to tell them how well they're doing.
I'd be lying if I said there's nothing inherently wrong with that, but I don't want to rehash that argument. What I'd rather focus on is one of my favourite accessibility metrics. I make it a point to track this number closely as it provides valuable insights into the effectiveness of the development process.
The number of new accessibility issues your latest release introduced
When you focus specifically on new issues introduced in each release, you're able to catch and address accessibility regressions quickly. It will also help you identify patterns in the types of accessibility issues you're introducing, which will inform where in the software development process you need to apply leverage. You can target training where it's needed and see the process improve from there.
It's easy to say "shift left." When the question is "how far left," "all the way" is usually the answer. Teams deal with things in this abstract language and assume that the further left you go, the better, like it's a universal rule. I don't think that holds every time, because it disregards the current situation on the ground.
The first step is to stop making things worse. And then you can start making them better. It also forces teams to work in the confines of where they are right now, instead of dreaming up some fantasy world where everything is accessible and imagining all the steps for what they might do to get there.
The advantage of this approach is that it forces you to continuously monitor for accessibility. You make sure you are at least maintaining the current standard, even if you're not improving things.
The disadvantage is that this is a controversial stance to take. Testing things in production! But show me one developer who does not test their code straight in production and I'll show you a liar. We do it all the time even when we don't publicly admit it.
So how do you track this?
One way to track this metric effectively is to use a combination of automated tools (such as axe or WAVE) and manual testing. Compare each release's audit results with the previous release's to identify new issues that weren't present before. Next, categorise these issues by severity and type and keep a running tally for each release. Over time, aim to drive this number down.
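The mechanics of "compare and tally" can be very small. Here's a minimal sketch in Python: the issue dictionaries, the severity labels, and the (rule, selector) identity key are my assumptions for illustration, not the output format of axe, WAVE, or any particular tool.

```python
# Sketch: diff two audit runs to find newly introduced issues, then
# tally them by severity. The issue shape and identity key are
# illustrative assumptions, not a real tool's output format.
from collections import Counter

def new_issues(previous, current):
    """Return issues in `current` that were absent from `previous`.

    An issue is identified here by its rule id plus the element it
    affects; real tools expose richer fingerprints, but this is
    enough to show the idea.
    """
    seen = {(i["rule"], i["selector"]) for i in previous}
    return [i for i in current if (i["rule"], i["selector"]) not in seen]

def severity_tally(issues):
    """Count new issues per severity level for the release report."""
    return Counter(i["severity"] for i in issues)

previous = [
    {"rule": "image-alt", "selector": "#hero img", "severity": "critical"},
]
current = [
    {"rule": "image-alt", "selector": "#hero img", "severity": "critical"},
    {"rule": "label", "selector": "#search input", "severity": "serious"},
    {"rule": "color-contrast", "selector": ".nav a", "severity": "moderate"},
]

introduced = new_issues(previous, current)
print(len(introduced))              # 2 new issues this release
print(severity_tally(introduced))   # Counter({'serious': 1, 'moderate': 1})
```

The carried-over `image-alt` issue doesn't count against the release; only the two newly introduced ones do. That's the whole point of the metric: pre-existing debt is tracked elsewhere.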
Be sure to ask these questions for each analysis:
- Why did this issue show up?
- How could we make sure it doesn't show up next time?
- Where is the best place to introduce a test to prevent recurrence?
- Who should be responsible for preventing this next release?
As product teams focus on this metric, they develop a more proactive approach to accessibility.
Cutting down the number of new accessibility issues introduced per release is a strong indicator of an improving development process. It suggests the team is considering accessibility earlier and more consistently in its design and development workflows.
This improvement not only results in a more consistently accessible website, but also leads to more efficient development cycles. You'll quickly find there's less need for last-minute fixes or post-release patches. You're effectively reducing overall development costs as a by-product.
More importantly, this metric encourages continuous learning and improvement within the development team. When they look at the types of new issues that do arise, teams can identify knowledge gaps and address them through targeted training or process changes.
I consider the number of new accessibility issues a product release introduces a critical metric: it tells you whether you're on the right track.