Have you ever proudly fixed some accessibility issues only to see your test suite explode into a sea of red failing tests?
Congratulations! You've just discovered the difference between tests that check boxes and tests that actually give you confidence.
It's surprisingly easy to write tests that pass while your users with disabilities can't actually use your product.
Your form validation might pass every automated check while screen reader users sit there wondering why nothing happens when they submit. A test might confirm that every image has an alt attribute, fully unaware that they all say "image" or "photo." Heck, the other day I saw an image with the alt text set to "decorative image." If you don't know why that's wrong, you should take my free course, Six Days to an Accessible Website.
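To make that concrete, here's a minimal sketch of the difference. The `findUselessAlts` helper and its list of generic values are my own illustration, not from any particular testing library: a naive "has an alt attribute" assertion passes all three images below, while a check for meaningless alt text catches two of them.

```typescript
// Hypothetical helper: flags alt text that technically exists but
// tells a screen reader user nothing useful.
const GENERIC_ALTS = new Set(["image", "photo", "picture", "decorative image"]);

function findUselessAlts(alts: (string | null)[]): number[] {
  const flagged: number[] = [];
  alts.forEach((alt, i) => {
    // A missing alt fails the naive check; a generic alt passes it
    // but is just as useless to the person listening.
    if (alt === null || GENERIC_ALTS.has(alt.trim().toLowerCase())) {
      flagged.push(i);
    }
  });
  return flagged;
}

// Every image here would pass a "has an alt attribute" assertion...
const alts = ["image", "photo", "A golden retriever catching a frisbee"];
// ...but only the last one would help anyone.
const useless = findUselessAlts(alts); // → [0, 1]
```

The point isn't this particular list of banned strings; it's that the assertion should test what the user experiences, not what the markup contains.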
But I digress.
The real value of accessibility tests isn't in maintaining a green glow. Good tests don't just verify that accessibility features exist; they check that the product works as intended for the people who use it. They catch when your keyboard navigation breaks. They catch when all your links say the same thing and make no sense out of context.
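That last check is easy to sketch. Screen reader users often pull up a list of every link on the page, stripped of surrounding context, so duplicate link text is indistinguishable there. The `findAmbiguousLinks` helper below is a hypothetical illustration of that idea, not an API from any real tool:

```typescript
// Hypothetical check: link text should make sense out of context,
// because assistive tech can present links as a standalone list.
function findAmbiguousLinks(linkTexts: string[]): string[] {
  const counts = new Map<string, number>();
  for (const text of linkTexts) {
    const key = text.trim().toLowerCase();
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  // Any text that appears more than once reads as identical
  // in a screen reader's links list.
  return [...counts.entries()]
    .filter(([, n]) => n > 1)
    .map(([text]) => text);
}

// Three "Read more" links point at three different articles,
// but out of context they're the same link three times.
const dupes = findAmbiguousLinks([
  "Read more",
  "Read more",
  "Read more",
  "Contact us",
]); // → ["read more"]
```

A test built on a check like this fails for the right reason: not because an attribute is missing, but because a real person can't tell the links apart.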
Failing tests are your canary in the coal mine.
When they turn red after you've made your site more accessible, they're not telling you something's wrong! They're telling you something was always wrong, and you're finally looking.