AI shifts accessibility right

3 minute read

I've been watching developers hand off entire features to AI agents lately and it's starting to worry me.

Not because the code doesn't work. Heck, sometimes it does. And you can't beat the speed of AI, although that's not always an advantage. No, I'm worried because of what gets missed when all you do is write a prompt and have the AI do the work.

This is the pattern I keep seeing and hearing about.

The developer prompts an AI to build a feature. It does. They then prompt the same AI to review its own code. Then they have it generate the tests. It all happens very fast. The generated code runs to hundreds or thousands of lines, so the developer rarely reviews all of it, if any.

They decide to finally ship it and call it a day.

Except accessibility never enters their thoughts.

The AI doesn't think to ask "can this be used with a keyboard?" or "will a screen reader make sense of this?" It builds what it's asked to build, nothing more. You might then think it's the developer's fault. They didn't write a better prompt. I wonder, what might that prompt have been? "And please make it accessible?"

This is what bothers me most. When an AI builds everything and reviews its own work, what standard is it checking against? Its training data is full of inaccessible code. No it's not, you say. Yes, yes it is! It's learned from a web that's largely broken for users with disabilities. So when it says the code looks good, what does that actually mean? When it writes the tests and they pass, what did those tests check? When coverage hits 100%, is that number anything more than a vanity metric?
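To make that concrete, here's a toy sketch of the kind of check an AI self-review rarely runs. It's not a real audit tool (the function name and the naive regex are my own invention for illustration; a real review would use something like axe-core or actual keyboard testing). It flags click handlers on elements that keyboard users can't reach:

```typescript
// Toy illustration only: flags a common inaccessible pattern,
// a clickable <div> or <span>. Unlike a native <button>, these
// elements don't receive keyboard focus or Enter/Space activation.
function hasUnreachableClickHandler(html: string): boolean {
  return /<(div|span)[^>]*onclick/i.test(html);
}

console.log(hasUnreachableClickHandler('<div onclick="save()">Save</div>'));
console.log(hasUnreachableClickHandler('<button onclick="save()">Save</button>'));
```

A generated test suite can hit 100% coverage without ever asking a question like this, because coverage measures which lines ran, not whether the feature works for someone without a mouse.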

We spent years trying to shift accessibility left, catching issues early, building it in from the start. Now we're watching it drift right again, pushed to the end of the cycle where it's expensive to fix and easy to deprioritise.

"We'll look at keyboard navigation later." "Let's get this out first, then worry about screen readers." Shit, I've heard it all before.

Are we teaching a new generation of developers that accessibility is something you bolt on at the end? That it's not part of "working code?" And are we reinforcing current beliefs of seasoned developers that accessibility is an afterthought?

And when something goes wrong, who's accountable? The developer who didn't write the "correct" prompt? The AI that didn't know to care? Those who trained the AI? Or the data it's been trained on?

I don't have answers. But I do think we need to talk about what we're automating away and whether we're comfortable with what's being lost.


Did you enjoy this bite-sized message?

I send out short emails like this every day to help you gain a fresh perspective on accessibility and understand it without the jargon, so you can build more robust products that everyone can use, including people with disabilities.

You can unsubscribe in one click and I will never share your email address.