Accessibility audits measure whether something could, in principle, be used under specific conditions. I think that's the wrong question.
What if we measured how often someone has to contact support because the product didn't work for them? That's a real number, and it probably already lives in a spreadsheet somewhere. We're just not connecting it to accessibility.
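For illustration, here's a minimal sketch of what that connection might look like, assuming support tickets carry a free-text reason field. The keyword list and ticket records are hypothetical, not a real taxonomy:

```python
# Hypothetical sketch: surface support contacts that hint at access failures.
# The ticket records and keyword list are illustrative, not a real taxonomy.
tickets = [
    {"id": 101, "reason": "couldn't submit the form with my screen reader"},
    {"id": 102, "reason": "billing question"},
    {"id": 103, "reason": "button does nothing when I tab to it"},
]

ACCESS_HINTS = ("screen reader", "keyboard", "tab to", "contrast", "zoom")

access_related = [
    t for t in tickets
    if any(hint in t["reason"].lower() for hint in ACCESS_HINTS)
]

rate = len(access_related) / len(tickets)
print(f"{len(access_related)} of {len(tickets)} tickets ({rate:.0%}) look access-related")
```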
Or error recovery time. When something breaks for a keyboard user, how long does it take them to get back on track compared with a mouse user? That gap is measurable. It's not easy to measure, which is why nobody's doing it.
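Here's a rough sketch of how that gap might be computed from event logs. The input-method labels and recovery times are invented:

```python
# Hypothetical sketch: compare median recovery time after an error,
# keyboard users vs. mouse users. The event records below are invented.
from statistics import median

# (input method, seconds from error event to next successful action)
recoveries = [
    ("keyboard", 48.0), ("keyboard", 61.5), ("keyboard", 55.0),
    ("mouse", 12.0), ("mouse", 9.5), ("mouse", 14.0),
]

def median_recovery(method: str) -> float:
    return median(s for m, s in recoveries if m == method)

gap = median_recovery("keyboard") - median_recovery("mouse")
print(f"median recovery gap: {gap:.1f}s slower on keyboard")
```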
Or task completion with assistive tech versus without. Not pass or fail: actual time, or the number of steps it takes. The delta between those two numbers is where the accessibility debt lives.
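As a sketch, that delta might be computed like this; the task names and step counts are made up for illustration:

```python
# Hypothetical sketch: express accessibility debt as the delta in steps
# to finish the same task with and without a screen reader.
# The task names and step counts are invented for illustration.
tasks = {
    "checkout":       {"baseline_steps": 6, "screen_reader_steps": 19},
    "password_reset": {"baseline_steps": 4, "screen_reader_steps": 11},
}

for name, t in tasks.items():
    delta = t["screen_reader_steps"] - t["baseline_steps"]
    ratio = t["screen_reader_steps"] / t["baseline_steps"]
    print(f"{name}: +{delta} steps ({ratio:.1f}x the baseline)")
```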
Or customer satisfaction scores, broken down by access method. We already run Net Promoter Score (NPS) surveys. We just never slice the data that way. I bet those numbers would be embarrassing.
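A minimal sketch of that slice, using standard NPS arithmetic (percent promoters, scores 9-10, minus percent detractors, scores 0-6). The access-method labels and survey rows are invented:

```python
# Hypothetical sketch: slice NPS responses by self-reported access method.
# NPS = % promoters (scores 9-10) minus % detractors (scores 0-6).
# The survey rows below are invented for illustration.
from collections import defaultdict

responses = [
    ("mouse", 9), ("mouse", 10), ("mouse", 7), ("mouse", 9),
    ("keyboard_only", 5), ("keyboard_only", 3), ("keyboard_only", 8),
    ("screen_reader", 2), ("screen_reader", 6), ("screen_reader", 4),
]

scores = defaultdict(list)
for method, score in responses:
    scores[method].append(score)

for method, vals in scores.items():
    promoters = sum(v >= 9 for v in vals) / len(vals)
    detractors = sum(v <= 6 for v in vals) / len(vals)
    print(f"{method}: NPS {100 * (promoters - detractors):+.0f}")
```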
The problem is we measure compliance like it's a finish line when it's not.
Audit tools count missing alt text. They don't count the accumulated small issues that make someone decide a website simply isn't for them.