Let's run through a (not-so) imaginary scenario.
Your team ships, with great fanfare, a new and improved checkout process. You've tested it thoroughly with some 20 users who all loved the new streamlined design. Champagne and bonuses all around!
Within days, complaints start to flood in. Some people can't complete the checkout at all. You scramble to blame your hosting provider and dig through the logs for any exceptions. There are none. What happened?
You ask the dev team to take a look, but they dismiss it out of hand. It can't be on their end. They reluctantly agree to look anyway.
When they do, they find the forms aren't labelled. The buttons are all fancy icons with no text. Not that text would matter, because the targets are too small to click on anyway. So screen reader users can't complete purchases, and people with motor difficulties can't hit the tiny buttons. Your "thorough" testing had accidentally excluded everyone who needed the site to work differently than you planned.
This is selection bias in action. And it's everywhere in user research.
Most teams recruit from the usual suspects. They post ads on social media, talk to existing customers, or hire an outside company to recruit for them. The recruitment brief says "tech literate, aged 25 to 45." They screen out anyone mentioning assistive tech because it seems "complicated." Testing then usually happens in perfect conditions that don't reflect how people actually use their devices.
It's not even malicious. Convenience sampling is easier. Many researchers simply don't know where to find disabled participants. Others worry about budget constraints or technical complications.
The real cost shows up when you miss those perspectives. You end up launching something with built-in barriers that could've been spotted early. Sometimes, legal troubles follow. You might lose customers. Don't forget that one in five people has a disability. That's a massive market.
So what can you do about it?
Partner with disability organisations for recruitment. Adjust your screening questions to welcome assistive tech users. Test with screen readers and voice control. Budget for accessibility participants upfront.
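That last point, budgeting for accessibility participants upfront, can be made concrete with a simple quota rule: reserve slots in every study cohort for assistive tech users instead of filling the panel first and hoping. Here's a minimal sketch; the `pool` records, field names, and numbers are illustrative assumptions, not a real recruitment tool.

```python
def select_participants(pool, total=8, min_assistive_tech=2):
    """Pick a study cohort that reserves slots for assistive tech users.

    `pool` is a list of dicts like {"name": str, "assistive_tech": bool}.
    A hypothetical sketch: real recruitment also involves screening calls,
    consent, and fair compensation, not just list filtering.
    """
    at_users = [p for p in pool if p["assistive_tech"]]
    others = [p for p in pool if not p["assistive_tech"]]

    # Fill the reserved slots first, then top up with everyone else.
    cohort = at_users[:min_assistive_tech]
    cohort += others[:total - len(cohort)]
    return cohort
```

The point of the reservation isn't the arithmetic; it's that the quota is decided (and budgeted) before recruitment starts, so assistive tech users can't be quietly screened out later.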
You'll say that takes time. That takes money. Ultimately, though, all it takes is a bit of willingness.
Willingness to have those slightly awkward conversations with procurement about why you need to spend differently.
Willingness to sit through a testing session where someone navigates your site completely differently than you expected. And to actually listen to what they're telling you.
The hardest part isn't the logistics or the budget. It's admitting that your beautifully designed study might be missing the very people who need your product to work most.
You'll think of them as outliers. As extremes. So be it.
But don't forget. When you design for the outliers and extremes, you end up improving the centre.