Last week, I laid out the difference between lab data and field data.
In short, lab data catches the obvious stuff. Field data shows you what actually happens when real people use your product.
And here's the missing piece. A way to decide when lab data is enough and when you need to watch real users struggle through your interface.
When lab data gets you almost there
If you run a marketing site with standard patterns, you're probably fine. The same goes for internal tools used by a handful of people who can ping you on Slack when something breaks. Automated testing will catch most problems on blog posts, documentation and basic content pages.
The bottom line: if people are just browsing and reading, and nothing you're doing is particularly weird, lab data will get you almost there. The risk is low. The patterns are predictable. You're probably not going to ruin someone's day.
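If you want a picture of what that lab-data pass looks like in practice, here's a minimal sketch of an automated scan using axe-core through Puppeteer. The URL is a placeholder and your setup will differ:

```ts
// Minimal automated accessibility scan: lab data in its purest form.
// Assumes `puppeteer` and `@axe-core/puppeteer` are installed.
import puppeteer from "puppeteer";
import { AxePuppeteer } from "@axe-core/puppeteer";

async function scan(url: string) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // Run axe-core against the rendered page and collect rule violations.
  const results = await new AxePuppeteer(page).analyze();
  await browser.close();

  // Each violation names the rule, its impact and the offending elements.
  for (const v of results.violations) {
    console.log(`${v.impact}: ${v.id} (${v.nodes.length} element(s))`);
  }
}

scan("https://example.com/docs").catch(console.error);
```

A check like this catches missing alt text, broken labels and bad contrast before you ship. What it can't do is tell you whether a real person can actually get through your flow.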
When you need field data
Here's where it gets tricky. If your product has multi-step forms, dashboards or pretty much anything involving payments or healthcare. If you have complex workflows where someone has to complete several steps to get what they need. If you use custom components and fancy interactions. This is where things can quickly fall apart.
If something breaks and that means someone can't do their job, pay their bill, access their medical records or apply for that thing before the deadline, automated testing isn't enough.
You need field data because no automated test can tell you if your flow makes sense. It can't tell you where people get confused or stuck. It can't tell you if your fancy interface works for everyone.
Quick risk assessment
Three questions and two minutes of your time. That's all this takes. And "it depends" doesn't count as an answer.
- Does this block access to critical services? Healthcare, financial services, legal stuff, emergency information. If yes, you need field testing.
- Do users need to complete multi-step workflows? Forms that span multiple pages, checkout flows, account setup wizards. The more steps, the more places people get stuck. If it's more than a single interaction, test with real users.
- Does failure cost users money, time or opportunity? Missed deadlines, lost purchases, can't apply for the job, can't register for the class. If there are real consequences, you need to know it works before you ship.
Answering yes to any of these means lab data isn't enough. You need to watch real people with real assistive tech try to use what you built.
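If it helps to see the logic spelled out, here's the same assessment as a tiny sketch. The shape of the input is made up for illustration, not from any real API:

```ts
// The three-question risk assessment as code. Field names are illustrative.
interface RiskProfile {
  blocksCriticalServices: boolean; // healthcare, finance, legal, emergency info
  hasMultiStepWorkflows: boolean;  // forms across pages, checkouts, setup wizards
  failureCostsUsers: boolean;      // money, time or missed opportunities
}

// A single "yes" is enough: the answers are OR'd, not averaged.
function needsFieldTesting(p: RiskProfile): boolean {
  return p.blocksCriticalServices || p.hasMultiStepWorkflows || p.failureCostsUsers;
}
```

Note that there's no weighting and no score. One yes means you watch real users, full stop.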
Lab data is still great for catching the obvious stuff before you ship. It's fast, it's cheap and it keeps you from shipping broken basics.
But it can't tell you if your product actually works for the people who need it most.