I've made the argument previously that metrics in accessibility should be your team's decision. Adopting metrics just because someone else said so, or because they worked for some other organisation, won't lead to anything worthwhile. You need to pick metrics that fit your process and the way you work. Adapt metrics to your process rather than your process to the metrics.
Here are some common pitfalls to avoid when deciding what accessibility KPIs to track.
Adopting metrics because they're the industry standard
Working towards 100% WCAG compliance is commendable, but not necessarily a good metric. For one thing, compliance is a lagging indicator: you'll only find out how you're doing long after the fact. More importantly, the guidelines are a baseline, not a goal. I think some teams hide behind compliance requirements and shoot themselves in the foot, because meeting the baseline isn't the same as making meaningful progress.
Setting metrics as a goal
"When a measure becomes a target, it ceases to be a good measure." (Goodhart's law)
Setting metrics as a goal is also a great way to open the door to people who will game the metrics while ignoring real progress towards an accessible product.
Comparing your metrics to another organisation's or an industry average
The only comparison that makes sense is against your past self. The goal is to improve your product's accessibility by identifying areas for growth, not to compete against others.
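As a minimal sketch of what "comparing against your past self" can look like in practice, here is a short Python snippet that diffs a current accessibility audit against your own earlier baseline. The category names and issue counts are entirely made up for illustration; they are not from any real audit tool.

```python
# Hypothetical sketch: track your own trend, not an industry average.
# Issue categories and counts below are invented for illustration.

def trend(baseline: dict, current: dict) -> dict:
    """Return the per-category change in open accessibility issues.

    Negative values mean fewer open issues than the baseline (improvement).
    """
    categories = set(baseline) | set(current)
    return {c: current.get(c, 0) - baseline.get(c, 0) for c in categories}

# Last quarter's audit (baseline) vs. this quarter's audit (current).
baseline = {"contrast": 12, "alt-text": 8, "keyboard": 5}
current = {"contrast": 7, "alt-text": 9, "keyboard": 2}

for category, delta in sorted(trend(baseline, current).items()):
    direction = "improved" if delta < 0 else "regressed" if delta > 0 else "unchanged"
    print(f"{category}: {delta:+d} ({direction})")
```

The point is not the code itself but the framing: the comparison is always between your own snapshots over time, so the numbers only have to be consistent with how your team counts issues, not with anyone else's methodology.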
Making one person responsible
Giving one person (or team) sole responsibility for your accessibility metrics will ultimately lead to blame and finger-pointing. Instead, make everyone responsible and share all accessibility metrics across development, design, testing, marketing, and management.
Over to you!
What pitfalls have you encountered?