I previously talked about measuring KPIs for accessibility and gave an example of one metric I like to track: the successful task completion rate for users with disabilities. This metric tells you how well your product supports users in achieving their goals, such as filling out a form.
Since this measures real user experiences, I find it more important to track than, say, how many WCAG success criteria you tick. And since you're dealing with real users, the first thing you need to do is set up usability tests.
If you want relevant results, one important criterion is participant selection. I like to make sure the users I test with reflect the variety of needs the product should accommodate, such as visual, hearing, motor and cognitive disabilities. I also want to include a diverse set of assistive technologies, like screen readers, keyboard navigation or voice commands, that reflects real-world usage.
I measure the percentage of users who can finish the tasks without errors and without assistance from me. As a bonus, I can compare this number with the completion rate of the overall user base and see if there's a gap. If the gap is consistent, the product doesn't properly account for users with disabilities: they are struggling more, they are more frustrated, and it's a clear sign we need to improve.
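To make the calculation concrete, here's a minimal sketch of how I might compute the two rates and the gap. The `Session` shape and the sample numbers are made up purely for illustration:

```typescript
// Hypothetical shape for one test session.
interface Session {
  completed: boolean;          // finished without errors or assistance
  usesAssistiveTech: boolean;  // e.g. a screen reader user
}

function completionRate(sessions: Session[]): number {
  if (sessions.length === 0) return 0;
  return sessions.filter((s) => s.completed).length / sessions.length;
}

// Made-up sessions, just to show the comparison.
const sessions: Session[] = [
  { completed: false, usesAssistiveTech: true },
  { completed: true, usesAssistiveTech: true },
  { completed: true, usesAssistiveTech: false },
  { completed: true, usesAssistiveTech: false },
];

const atRate = completionRate(sessions.filter((s) => s.usesAssistiveTech));
const overallRate = completionRate(sessions);

console.log(`Assistive tech users: ${(atRate * 100).toFixed(0)}%`); // 50%
console.log(`Overall: ${(overallRate * 100).toFixed(0)}%`);         // 75%
console.log(`Gap: ${((overallRate - atRate) * 100).toFixed(0)} pp`); // 25 pp
```

A persistent gap like the one in this made-up data is exactly the signal I'm looking for.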
Of course, it's not just the number that matters. When I look back at the sessions and my notes, I combine the numbers with qualitative feedback to better understand the barriers some users face. This is really where the value is, since it helps me prioritise accessibility fixes for the areas that need serious improvement.
Let's say I want to measure the successful task completion rate for users with visual impairments using a screen reader to fill out an online registration form. Here's what I would do and what I would look at.
- Introduction. Talk to the participant and put them at ease by letting them know I'm not testing them, but the form. I won't give specific instructions on how to use the form.
- Execution. Have them navigate the form using their screen reader. They should be able to enter text, select options and submit the form without guidance.
- Observation. Monitor how they use the form, paying attention to:
- Whether they can identify and navigate to each form field
- How easily they can understand the labels, instructions and error messages
- If they have any difficulties in selecting options from dropdowns
- Whether they can identify required fields and successfully submit the form
- Questions. In my notes I write down things like the following (there's a sketch for structuring these notes after the list):
- Did the user successfully complete the form and submit it without assistance? This is my primary metric.
- How long did it take to complete the task? Significant delays might indicate usability issues.
- Were there any points where they got stuck or made mistakes? These are areas that need improvement.
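To keep my notes comparable across sessions, I might capture each one in a small structured record. This is just a sketch; the field names mirror the questions above and the values are entirely hypothetical:

```typescript
// Hypothetical record for one screen-reader session on the registration form.
interface SessionNotes {
  participantId: string;
  completedWithoutAssistance: boolean; // the primary metric
  timeToCompleteSeconds: number;       // long times hint at usability issues
  stuckPoints: string[];               // where they got stuck or made mistakes
  observations: string[];              // free-form qualitative notes
}

const example: SessionNotes = {
  participantId: "p-01",
  completedWithoutAssistance: false,
  timeToCompleteSeconds: 412,
  stuckPoints: ["country dropdown", "required-field markers"],
  observations: ["error message was not announced by the screen reader"],
};

console.log(example);
```

Having the same fields for every session makes it easy to aggregate the primary metric later and still keep the qualitative context attached.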
If users with screen readers have a low success rate, I prioritise fixing the issues identified during the tests. Then I retest to make sure the fixes actually worked.
Is it difficult to track?
Yes! It's one of the more complicated metrics to track because of the logistics involved in user testing.
Is it worth it?
Heck yes it is! The data from these tests will help me prioritise accessibility fixes, track progress over time and ensure that everyone can use the product effectively.