AI.
If you're still reading, today's topic is Artificial Intelligence (AI) and web accessibility.
The Web Almanac 2025's findings on AI are an interesting read, and they bring up more questions than answers. I was quite space-constrained when writing that chapter. I also couldn't really express my own opinions there. Alas, I don't have that limitation here.
AI is a highly debated topic. Some see it as reshaping how we build websites and how we make them accessible. Others are sceptical of the powerful new tools that have come out recently and raise plenty of uncomfortable questions about them.
On the one hand, AI is already used to generate image descriptions and captions, and as a crutch for developers to flag and fix accessibility issues in code and content workflows. Tools like AI-assisted alt text generators and accessibility scanners are becoming more mainstream. Devs seem to be getting faster feedback on problems.
On the other hand, AI-generated descriptions and captions often miss context, user intent and nuances like jokes or metaphors. Not to mention that AI is usually trained on existing web code and content, which often contain accessibility flaws already, so it tends to reproduce those patterns rather than do better. Garbage in, garbage out.
The Almanac also brings up broader ethical issues, like data use, privacy, environmental impact and encoded bias. All of this directly affects who benefits from AI-assisted accessibility and who may be harmed or stereotyped.
However you look at it, AI is starting to play a role in how we ship websites. And, implicitly, in how accessible those websites are.
For product owners, the promise is intriguing. And it's worth understanding what AI can and cannot do.
On the practical side, AI-powered tools can scan sites at scale, flagging common issues like missing alt text, poor colour contrast or broken heading structures. They integrate into design workflows and pipelines, so problems surface earlier and fixes get prioritised faster. For organisations without dedicated accessibility expertise, this kind of automated guidance can be useful.
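To make that concrete, here's roughly the kind of check these scanners automate. This is a minimal sketch, not any particular tool's implementation: it only looks for images without alt attributes and skipped heading levels. Real scanners like axe-core or Lighthouse cover far more, including colour contrast.

```typescript
// Minimal illustration of the kind of checks automated scanners run.
// Runs against the current page's DOM, e.g. from a browser console (as compiled JS).

type Finding = { element: string; issue: string };

function scanForBasicIssues(root: Document = document): Finding[] {
  const findings: Finding[] = [];

  // 1. Images without an alt attribute (note: an empty alt="" is valid for decorative images).
  root.querySelectorAll("img:not([alt])").forEach((img) => {
    findings.push({
      element: (img as HTMLImageElement).src,
      issue: "Image is missing an alt attribute",
    });
  });

  // 2. Skipped heading levels, e.g. an <h4> directly following an <h2>.
  let previousLevel = 0;
  root.querySelectorAll("h1, h2, h3, h4, h5, h6").forEach((heading) => {
    const level = Number(heading.tagName[1]);
    if (previousLevel && level > previousLevel + 1) {
      findings.push({
        element: heading.textContent?.trim() ?? heading.tagName,
        issue: `Heading level skips from h${previousLevel} to h${level}`,
      });
    }
    previousLevel = level;
  });

  return findings;
}

console.table(scanForBasicIssues());
```

Even this toy version shows the limit: it can tell you an alt attribute is missing, but not whether the alt text that is there actually describes the image.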
For end users, AI can power features like automatic captions or real-time text simplification and summaries.
But the limitations are significant. Automated scans catch only a fraction of accessibility barriers. AI is no different. We still rely on human judgement and testing with actual users with disabilities. Auto-generated alt text or captions are often inaccurate. And "overlay" solutions, AI-powered or not, have been widely criticised for creating as many problems as they solve.
What AI offers isn't a shortcut to compliance or inclusion. I see it more like scaffolding. It's really useful for handling repetitive work, surfacing obvious gaps and making certain things more feasible at scale. But there's still a lot of hand-holding involved, lots of manual testing and nothing beats real user feedback to get accessibility right.
The potential is there and it goes beyond checklists. It could act as a force multiplier. Or it could just be the next NFT: lots of chatter and no tangible output.
Used thoughtfully, it's a helpful tool. Relied on blindly, AI is more of a liability than anything else.
So far, what I've seen people do is rely on it blindly. This dreadful cycle of prompt, approve, prompt, approve will get us nowhere.
Hey, look what I created in an hour this morning while I had my coffee on the toilet. It's amazing what you can do with a prompt and we can ship it this afternoon. It even has a logo that pops, colourful gradients and the AI said it "meets WCAG 2.1 standards," so that must be good, right?!
This. This is a more-or-less real conversation.
The thing is, code is dirt cheap. The real cost isn't in generating more lines of inaccessible code; it's in what happens after you ship them. Unless you understand what you're building, why it matters and who gets left out when you cut corners, it's you who's the robot. AI makes it incredibly easy to pump out code without thinking. And maybe that's exactly the problem.
I've watched developers treat AI like a magic button. They type into a prompt and ship whatever comes back. No testing. No questions. Just blind faith that the AI knows better.
More often than not, it doesn't. AI learns from existing code, and most existing code has no concept of accessibility. So when you ask an AI to generate a modal dialog or a custom dropdown, it's pulling from a pool of examples where maybe 10% got it right. You're essentially rolling dice and hoping you land on that 10%. Not only that: every time you accept what it gives you without checking, you reinforce the idea that it does a good job, so your chances of landing in the 10% get lower by the prompt, as it were.
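For a sense of why the odds are so poor, here's roughly what a hand-rolled modal has to get right and what generated snippets routinely skip: dialog semantics, focus management, a focus trap and an Escape handler. This is only a sketch with made-up element references, not a complete implementation.

```typescript
// What an accessible modal needs beyond "a div that appears on top".
// The dialog/trigger elements and the "dialog-title" id are placeholders.

function openDialog(dialog: HTMLElement, trigger: HTMLElement): () => void {
  // Semantics: expose it as a modal dialog with an accessible name.
  dialog.setAttribute("role", "dialog");
  dialog.setAttribute("aria-modal", "true");
  dialog.setAttribute("aria-labelledby", "dialog-title");
  dialog.hidden = false;

  // Move focus into the dialog when it opens.
  const focusable = dialog.querySelectorAll<HTMLElement>(
    'button, [href], input, select, textarea, [tabindex]:not([tabindex="-1"])'
  );
  if (focusable.length > 0) {
    focusable[0].focus();
  } else {
    dialog.setAttribute("tabindex", "-1");
    dialog.focus();
  }

  function close(): void {
    dialog.removeEventListener("keydown", onKeydown);
    dialog.hidden = true;
    trigger.focus(); // Return focus to whatever opened the dialog.
  }

  function onKeydown(event: KeyboardEvent): void {
    if (event.key === "Escape") {
      close(); // Escape must dismiss the dialog.
      return;
    }
    // Rudimentary focus trap: keep Tab cycling inside the dialog.
    if (event.key === "Tab" && focusable.length > 0) {
      const first = focusable[0];
      const last = focusable[focusable.length - 1];
      if (event.shiftKey && document.activeElement === first) {
        event.preventDefault();
        last.focus();
      } else if (!event.shiftKey && document.activeElement === last) {
        event.preventDefault();
        first.focus();
      }
    }
  }

  dialog.addEventListener("keydown", onKeydown);
  return close; // Caller wires this up to the close button and backdrop.
}
```

The native dialog element with showModal() gives you much of this for free, and even this sketch skips things a real implementation needs, like hiding the rest of the page from screen readers. That's exactly the kind of nuance a prompt-approve cycle never surfaces.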
Worse, AI-generated code tends to be verbose and over-engineered. So much so that it's blatantly obvious AI wrote it and no human reviewed it. Hey, at least it has lots of comments, and comments are good, right?! Seriously though, this prompt-approve cycle is particularly dangerous because it creates an illusion of productivity. You're shipping features fast. The backlog's shrinking. Management's happy. But it's all shit.
How do I know it's shit? Consider these excuses.
I was tired last night, so we might need to review this code.
Or
I'm not really fluent in Go, so I think what I wrote needs more work.
These are excuses that make sense. When I hear them, what I hear is "I probably messed up, I know it, you know it, so let's work together to fix it."
Now consider this one.
I wrote this with AI, so I'm not sure how it works exactly.
This excuse makes no sense. But it's saying the same thing: I messed up and I know it. Only now you can't blame me, because I didn't write it. With the implication that, if I had, it would have been good (as opposed to a shrug and questionable quality). Why else would anyone feel the need to say "it's written with AI," unless it's to avoid taking responsibility for it?!
I'm not saying don't use AI to write code. I'm saying don't be a rubber stamp. If you don't understand what the code does, if you can't explain why it's accessible and test it yourself, then you're not building anything. Leave the copy-pasting to monkeys. They get paid in bananas and throw shit at you when they don't like it.
I've seen devs who aren't monkeys as well. They're using AI as a starting point. As a sparring partner. They let the robot generate the boilerplate, then they jump in and read it. They test it. They refine it. They ask questions. They learn from what the AI produces and then they ship.
Ah, but that takes time. That takes knowledge. And that no longer fulfils the promise of AI: launch faster, save money, no knowledge required.
It is, however, what separates code that works for everyone from code that just exists in a bubble that'll eventually pop.