Our team's resources are limited, yet we still need to ensure our site is accessible (it is a government site in the EU). Manual testing involves a lot of fiddling with JAWS, occasionally other screen readers, Lunar Plus for visually impaired users, etc. We would like to automate as many of these tasks as possible.

While automating semantic checks (like the `for` attribute on labels or correct heading order) is pretty trivial with Selenium or PhantomJS, getting correct output from JAWS is a completely different matter. Is there any way to actually write a test script and verify the output of a screen reader?
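For context, a semantic check like the heading-order one mentioned above can be sketched with nothing but Python's stdlib parser; the same logic could equally run against page source fetched via Selenium. This is a minimal sketch, not a complete checker, and the sample markup is hypothetical:

```python
# Sketch: flag headings that skip a level (e.g. an <h3> right after an <h1>).
# Stdlib only; feed it driver.page_source if you are using Selenium.
from html.parser import HTMLParser

class HeadingOrderChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []   # human-readable descriptions of skips
        self.last_level = 0    # level of the most recent heading seen

    def handle_starttag(self, tag, attrs):
        # Match h1..h6 (but not e.g. <hr>).
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if level > self.last_level + 1:
                self.violations.append(
                    f"<{tag}> follows h{self.last_level}: skipped a level")
            self.last_level = level

def check_heading_order(html):
    checker = HeadingOrderChecker()
    checker.feed(html)
    return checker.violations

page = "<h1>Title</h1><h3>Oops</h3><h2>Fine from here on</h2>"
for violation in check_heading_order(page):
    print(violation)  # flags the h1 -> h3 jump
```

Checks like this catch structural problems cheaply, which is exactly the part that is easy to automate; verifying spoken output is the hard part.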

2 Answers

Actually using a screen reader is a difficult skill to master. It is unlikely that you would be able to get a QA resource who is expert enough in using a screen reader to know whether a problem using the tool is actually a bug or not. In my experience, you would be much better off focusing on ensuring you are following best practices than by trying to simulate the use of accessibility tools.

Edit:
After reading a few of the other answers, I wanted to expand on mine. Suppose you do use JAWS to test the accessibility of your web site, and suppose JAWS has a bug: a flaw in the way it consumes the accessibility data on your site. You see a problem, assume it is a bug in your site's accessibility, and "fix it" for JAWS, thereby breaking the site for every other screen reader. JAWS has flaws just like any other software, especially with the HTML5 changes being rolled out and implemented differently across browsers. Accessibility tools are currently undergoing a lot of churn and will certainly have bugs of their own that need to be addressed.

So, I will again stick with my assertion that following accessibility best practices is more cost-efficient, more reliable, and simpler.
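Best-practice checks of the kind this answer recommends are cheap to automate. As one illustration, here is a minimal stdlib-only sketch (with hypothetical sample markup) that flags `<label for="...">` attributes referencing no element on the page, a common labelling mistake:

```python
# Sketch: find labels whose "for" attribute points at a nonexistent id.
# Stdlib only; feed it the page source from Selenium or elsewhere.
from html.parser import HTMLParser

class LabelForChecker(HTMLParser):
    """Collects label 'for' targets and all element ids on the page."""
    def __init__(self):
        super().__init__()
        self.label_targets = []  # values of for="..." on <label> tags
        self.ids = set()         # every id="..." seen anywhere

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "id" in attrs:
            self.ids.add(attrs["id"])
        if tag == "label" and "for" in attrs:
            self.label_targets.append(attrs["for"])

def dangling_labels(html):
    """Return the 'for' values that match no id on the page."""
    checker = LabelForChecker()
    checker.feed(html)
    return [t for t in checker.label_targets if t not in checker.ids]

page = '<label for="email">Email</label><input id="email"><label for="fone">Phone</label>'
print(dangling_labels(page))  # the "fone" target has no matching id
```

A rules-based check like this (or a library such as axe-core, which bundles many of them) validates the markup once, for every screen reader at the same time, which is the cost argument the answer is making.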

If you need to support a screen reader, make sure that support is baked into the web site early on rather than bolted on later. As far as I know, there are no off-the-shelf tools for testing the output (the computer-generated speech). Sometimes you really need to hear the sounds JAWS generates, because at times the spoken text can come out pretty garbled or even sound like a different word altogether.

I tested a JAWS (Job Access With Speech) implementation for a major web site frequently for well over a year. Once you get past the first few hours of listening to the computer-generated voice and learn the navigation keys, JAWS testing can be quite easy and quick.

Good luck with your implementation, someone out there will appreciate it!

JAWS is one of many accessibility tools. If you test with JAWS, why not Window-Eyes, Hal, and so on? Where do you stop? The time and cost required to test with any of these tools is much higher than the cost of following accessibility guidelines and best practices. At some point you end up testing JAWS itself (which, hopefully, its own QA team has already done) rather than what you actually intend to test: that your site is accessible. I briefly used JAWS and Window-Eyes back in the early 2000s for this kind of validation and quickly realized it wasn't benefiting anyone.
– Sam Woods Jan 24 '13 at 0:21