The Quagmire of Web Accessibility

The other day, a comment posted to BC pointed out that I had sidestepped a question about automated testing tools that could simulate a specific screen reader.  This got me thinking about automated test tools and the problem of screen readers in general.

Matt May, one of the world’s top document accessibility experts, corrected me in a comment that said, “Sorry, Chris, but with respect to developer costs, I don’t think you really addressed my comment at all. What I wrote was not a defense of discounted versions of ATs, but a call for AT simulators designed for developers.”

I apologize for misinterpreting the question the first time.  I agree that an automated AT test tool that a developer, whether of web sites or applications, could use to check their work against a screen reader or other AT product would be a terrific idea.

A screen reader like JAWS or Orca, which includes a built-in scripting language, could probably be modified into a testing tool, as the techniques a screen reader employs to send data to its users are actually pretty similar to those used in RoboTest and other generic testing tools.

As the source code to Orca is available to the world, one might be able to find grant money to develop a test tool based on it for checking GNOME applications, Internet and media accessibility.  The overhead necessary to write a proper test harness and the requirement to remain faithful to it might cause a problem for small-time developers, though.  Writing an automated test script for a web site would require a lot of statements of the form, “the AT should say X when it encounters element Y.”  If the tests written into the script come up as failing, you will then need to determine whether the problem lies in your web document, your test script or the automated testing tool, as a bug might exist anywhere.
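To make the shape of such a script concrete, here is a minimal sketch in Python.  Everything in it is hypothetical: the element selectors, the expected utterances and the captured speech are made-up fixtures, standing in for output from a real AT or simulator.  The point is only that the script states expectations and that a failure, by itself, cannot tell you which of the three pieces is buggy.

```python
# Hypothetical rules: when the AT encounters this element, it should say this.
EXPECTATIONS = [
    ("img[alt='Company logo']", "graphic, Company logo"),
    ("a[href='/contact']",      "link, Contact us"),
    ("table#prices",            "table with 3 columns and 5 rows"),
]

def run_tests(spoken_output, expectations):
    """Compare what the (simulated) AT actually spoke against the script.

    spoken_output: list of utterances captured from the tool.
    Returns a list of failures; an empty list means the page passed.
    A failure could still be a bug in the page, the test script or the
    testing tool itself -- the script cannot tell you which.
    """
    failures = []
    for selector, expected in expectations:
        if expected not in spoken_output:
            failures.append((selector, expected))
    return failures

# Pretend capture from a simulated screen reader pass over the page.
captured = [
    "graphic, Company logo",
    "link, Contact us",
    # the table announcement is missing -- somewhere there is a bug
]

failures = run_tests(captured, EXPECTATIONS)
for selector, expected in failures:
    print(f"FAIL: expected {expected!r} at {selector}")
```

Even a toy like this shows how quickly the typing adds up: every meaningful element on every page needs its own expectation line, and every one of those lines has to be kept current.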

If we can get past the effort to build an extensive test script, which will require either a ton of typing or a really, really smart testing tool, we then need to worry about remaining faithful to it.  In the case of web tests, one would need to update the test script every time the web page changed.  For an application, one could either build in “invasive” test code that can “talk” to external testing tools or write an external script but, either way, the programmers will need to remain faithful to the test framework to ensure the integrity of its output.

Someone could probably write a really ugly JAWS script today that, using the functions designed to work in its virtual buffer, could walk through a web site and dump the text it would say to a file.  To my knowledge, though, only one person (Ben Key from FS) has ever gotten the JAWS scripting language to support recursion, and he did it to prove he could; I’m not sure the technique has ever seen practical use, and writing a tool that would hit all of the possibilities on a web site would be really hard without recursion.
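There is a classic workaround for a scripting language without recursion: replace the call stack with an explicit stack.  Here is a sketch of that trick in Python, since I can’t show JAWS script here; the nested-dictionary “page” is a hypothetical stand-in for a virtual buffer, not any real AT data structure.

```python
def dump_spoken_text(root):
    """Depth-first walk of a page tree without any recursive calls."""
    spoken = []
    stack = [root]                      # explicit stack replaces recursion
    while stack:
        node = stack.pop()
        if node.get("text"):
            spoken.append(node["text"])
        # push children in reverse so they come off in document order
        for child in reversed(node.get("children", [])):
            stack.append(child)
    return spoken

# Hypothetical page tree standing in for a screen reader's virtual buffer.
page = {
    "text": "My Home Page",
    "children": [
        {"text": "link, About"},
        {"children": [{"text": "heading level 2, News"}]},
    ],
}

print(dump_spoken_text(page))
# -> ['My Home Page', 'link, About', 'heading level 2, News']
```

Whether the JAWS scripting language offers enough in the way of data structures to manage such a stack is exactly the kind of question that makes the “really ugly script” hard to write.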

Remaining “faithful” to an automated test tool that simulates a particular AT raises other issues and would require that the AT manufacturer and the team who builds the test tool work pretty closely together.  In the case of the script-based screen readers, the AT hacker would need to communicate any change to a scripting function (subtle or profound) to the tool developer, who would then need to make appropriate changes.  Also, internal changes to said AT may have unexpected outcomes which could fool a testing tool pretty easily.  In fact, a bug fix in a screen reader could break a work-around employed in a test tool or screen reader script, as one may have written the tool in a fashion that depended on a specific bug.

Back in the wild days of DOS, Borland released Turbo Assembler (TASM) to compete with Microsoft’s MASM.  One of the cool TASM features let one set a flag to “simulate MASM bugs,” which served the valuable purpose of letting those of us hackers who exploited MASM bugs keep making our programs do cool things.  Microsoft programmers, in one of the MS-DOS releases, fixed a bug that many of us who wrote device drivers for oddball hardware used to make some tasks more convenient and, hence, broke our software.  People like me, who used Ray Duncan’s “Advanced MS-DOS” and Schulman’s “Undocumented DOS” as hacking bibles, found such “bug fixes” annoying, as the proper way to do things often took more instructions and, back in the days when we still counted bits and bytes, every instruction counted.

Also from modern hacking history: in the early days of Borland, Philippe Kahn and another hacker discovered the terminate-and-stay-resident (TSR) interrupt, which MS had included in early versions of DOS without documentation so its programmers could perform some debugging tasks.  PK and his gang released SideKick, one of the most popular DOS programs ever, which forced Microsoft to document the TSR functionality so it wouldn’t break the increasingly large number of TSR programs that hit the market all at once.

A few old-timers will, of course, remember that Vocal-Eyes, JAWS for DOS and the other DOS screen readers all used the TSR hook to simulate multitasking on a single-threaded operating system.  Both MS Word and WordPerfect for DOS (up through release 5.5 of each) actually included special “screen reader” interrupts that would only work while they were running.  I learned of these hooks long before I found my way into the AT biz, when I wrote a TSR for “Right Writer,” a DOS grammar checker.  With a single keystroke, the TSR would figure out which word processor you were using, force it to save the document under a slightly different name, run it through the grammar checker and then load the marked-up version of the document back into the word processor.  In many ways, I used techniques identical to those of a DOS screen reader in this TSR but have never actually looked at a single line of DOS screen reader code.

Returning to the topic at hand, how can we create a screen reader simulator or, more aptly, a suite thereof that web and other developers can use to learn how specific AT products will work with a particular task?

Matt continued, “In any case, though, simply reading and following web standards is not sufficient, as developers who have experience doing that know that the ATs don’t adequately support web standards.”  I know FS has (or had) a document called “The HTML Challenge” on its web site, which includes as many HTML possibilities as we could think of, coded to the WAI guidelines, that a person can download and try out with JAWS and other web access utilities.  I can’t remember the name of the person (he is Canadian, if I remember correctly) who distributed a spreadsheet of web guidelines versus various versions of JAWS, Window-Eyes and HPR at some recent CSUN conferences.  I probably still have a copy of one of these reports (2004 maybe), which showed that JAWS and HPR came pretty close to meeting or exceeding 90% of the tests he ran for his research.  Window-Eyes, in the last of these reports that I read, fell behind IBM and FS, but they have made many improvements since and, to my knowledge, no one has done a side-by-side test that includes JAWS 7.0, WE 5.5 and the most recent HPR.  As far as I know, all of these tests used Internet Explorer and a Windows OS, which would leave out any of the GNU/Linux or Macintosh solutions.

I agree with Matt’s assertion that pushing AT and web developers alike toward the standards has felt a bit like offering root canals as party favors.  Coming from the AT side, I can attest that the JAWS team, including Glen, Eric and me, always pushed people toward the WAI guidelines when we received questions about how to make their web sites accessible with JAWS.  We did our best to comply with the guidelines and, if you refer to the reports published at CSUN (by the author whose name escapes me this morning), you will find that JAWS and HPR, in each subsequent release, came closer to meeting 100% of the tests.

While at FS, I would often receive phone calls from a random web developer asking what they needed to do to work with JAWS.  I would send them to the guidelines.  Inevitably, they would call back asking if we could do something as a work-around because they didn’t want to take on the cost of recoding to the guidelines.  In some of these cases, depending upon the popularity of the web site and the perceived value to our users, we might throw in some exception code to handle their issue.  I know we did a lot of coding to work around all kinds of broken HTML that would otherwise cause things to crash.  So, the web standards and guidelines battle hasn’t been a walk in the park for the AT people either.

So, where does this leave us?

The problem, as stated by Matt May and others, is that different AT products will support the guidelines in different ways and may or may not support the entire collection of them.  A solution to this problem is far more elusive than anyone involved in the document accessibility wars over the past decade ever expected.  A solution needs to account for at least six screen access tools (JAWS, HPR, Window-Eyes, HAL, Orca, VoiceOver), at least three browsers (IE, Firefox and Safari) and at least three operating environments (Windows, GNOME, Macintosh).  For each screen access tool, a test needs to be aware of its unique user interface as, often, the UI causes one to choose one tool over another.  Finally, it needs to be aware of how each of these user interfaces will expose any given object described in the guidelines.
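Just counting the combinations makes the scale of the problem visible.  The following sketch enumerates the matrix from the lists above; the `plausible` function is my own rough guess at which combinations actually exist (for instance, the Windows screen readers don’t run under Safari), so treat its rules as assumptions, not facts about any product.

```python
from itertools import product

screen_readers = ["JAWS", "HPR", "Window-Eyes", "HAL", "Orca", "VoiceOver"]
browsers = ["Internet Explorer", "Firefox", "Safari"]
platforms = ["Windows", "GNOME", "Macintosh"]

def plausible(sr, browser, platform):
    """Rough, assumed validity rules that prune impossible combinations."""
    windows_only = {"JAWS", "HPR", "Window-Eyes", "HAL"}
    if sr in windows_only and platform != "Windows":
        return False
    if sr == "Orca" and platform != "GNOME":
        return False
    if sr == "VoiceOver" and platform != "Macintosh":
        return False
    if browser == "Safari" and platform != "Macintosh":
        return False
    if browser == "Internet Explorer" and platform != "Windows":
        return False
    return True

matrix = [combo for combo in product(screen_readers, browsers, platforms)
          if plausible(*combo)]
print(len(matrix), "configurations to test")
```

Even after pruning, every remaining configuration still needs its own UI-aware test pass, which is where the real cost lives.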

Even with the source code to Orca, making it run “silently” in a scripted test mode is a non-trivial task.  Without the source code to a screen reader, I don’t think this project would be possible or, if it is, it will require software convolutions and far more time than would ever be cost effective.

Thus, for the Macintosh and Windows based screen readers, their authors would need to take their base products and create testing versions of them.  They would need to agree on a common test vocabulary so people who want to test against any of these different variations need only write a single test script to be run against the different reference platforms.  One would also need to build a method for the test versions to properly communicate with the various web browsers in a batch system.  Finally, to meet all of Matt’s requirements, all of these would need to be packaged up as web objects on which a developer can lease time.
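The “common test vocabulary” idea can be sketched too.  Nothing below is a real product interface: the abstract commands, the `ReferenceBackend` class and the per-reader speech maps are all hypothetical.  The point is only that the script itself never names a specific product, while each vendor-supplied testing build interprets the same script in its own way.

```python
# One abstract command set that every testing version would accept.
SCRIPT = [
    ("open",   "http://example.com/form.html"),
    ("next",   "form field"),          # move to the next form field
    ("expect", "edit, First name"),    # what any conforming AT should say
]

class ReferenceBackend:
    """Stand-in for a vendor-supplied testing build of a screen reader."""
    def __init__(self, name, speech_map):
        self.name = name
        self.speech_map = speech_map   # what this AT says per abstract step

    def run(self, script):
        failures = []
        for verb, arg in script:
            if verb == "expect":
                actual = self.speech_map.get("form field", "")
                if actual != arg:
                    failures.append((self.name, arg, actual))
        return failures

# Two imaginary readers that phrase the same control differently.
backends = [
    ReferenceBackend("ReaderA", {"form field": "edit, First name"}),
    ReferenceBackend("ReaderB", {"form field": "First name, edit box"}),
]

for backend in backends:
    for name, expected, actual in backend.run(SCRIPT):
        print(f"{name}: expected {expected!r}, heard {actual!r}")
```

The second reader “fails” here only because it phrases the announcement differently, which is exactly why the vendors would have to agree on the vocabulary, and on what counts as equivalent output, before any of this could work.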

Certainly, one would need to find a large funding source to make all of this happen, and one would need to find someone or some group (ATIA?) to coordinate the effort to ensure the aforementioned compatibility between the flavors of the test tool.  As getting AT companies to cooperate on anything is a lot like herding cats, the chairman of such a project would need the patience of Job and the win-at-all-costs attitude of General Patton.  While possible, and something I would love to see happen, I doubt this will occur in the near future.

Avoiding pessimism entirely, I do think that, with the GNOME accessibility API or the upcoming User Interface Automation (UIA) from MS, one could write a killer web test tool that accesses the browser on a system and generates a report of the information that screen readers have available to them; in the event that a particular screen reader misses something, the canonical information report can point to where its bug exists.  Thus, the onus for guideline adherence can be split between the AT developer and the web author, and this tool can act as the referee.  This tool, if crafted properly, could contain a UI description table for each AT product so the tests can be performed employing the same sequence of actions that a user might follow, which could add more realism to the entire process.  If the Windows screen reader vendors embed UIA in their products so they can better automate their testing, one might also be able to use the screen readers themselves in “test mode” to test the web document.  If I knew more about the Orca scripting language, I could venture a bet as to whether or not it can be scripted for use as a testing tool but, alas, I haven’t done all of the reading I should have by now.
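The referee logic itself is simple to sketch.  This is not code against any real accessibility API: the flat list of nodes and the captured speech are hypothetical fixtures standing in for what a tool built on AT-SPI or UIA would actually dump.  The interesting part is the blame assignment: if the API exposes the information and the AT never spoke it, the bug is the AT’s; if the API itself lacks it, the bug belongs to the page or the browser.

```python
def check_images_have_names(api_tree):
    """Guideline-style check: every graphic exposed by the API needs a name."""
    return [n for n in api_tree if n["role"] == "graphic" and not n["name"]]

def assign_blame(api_tree, sr_spoke):
    """Split responsibility between the page/browser and the screen reader."""
    verdicts = []
    for node in api_tree:
        if node["role"] == "graphic":
            if not node["name"]:
                # the information never made it into the API: page's fault
                verdicts.append(("page/browser", node["id"]))
            elif node["name"] not in sr_spoke:
                # the API had it but the AT stayed silent: AT's fault
                verdicts.append(("screen reader", node["id"]))
    return verdicts

# Hypothetical dump of the accessibility tree for a two-image page.
api_tree = [
    {"id": "img1", "role": "graphic", "name": "Company logo"},
    {"id": "img2", "role": "graphic", "name": ""},    # no alt text
]
sr_spoke = []          # the AT under test said nothing for either image

print(assign_blame(api_tree, sr_spoke))
```

With this split, neither side can point at the other: the report says plainly whose queue the bug sits in.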

One frustration I felt while working at FS, which Matt probably doesn’t share, would surface every time I pointed to the report showing that JAWS and HPR did (mostly) work properly with the guidelines.  It would come from people who would say that, even though FS and IBM spent the time and development dollars to try to comply with the guidelines, one had to consider the users of other AT products that had yet to catch up.  When one has busted his own ass to help the product he works on comply with standards or guidelines, being told that the efforts his team and his friends at IBM went through to achieve this goal didn’t matter, because other AT products, whose makers chose to spend their time on tasks they deemed more important, had to receive the same consideration as the good citizens who did their best to do the right thing, would infuriate me.  Thus, jumping on the standards and guidelines bandwagon let us boast a bit, but we still had to accept that people who used MSAA based or open source text browsers hadn’t caught up.

I share the rest of Matt’s frustrations about standards, guidelines, AT and mainstream developers and the whole system of finger pointing that has emerged from the ideals that caused the WAI to get started in the first place.  As an AT vendor, I would point to the web developer, who would point to the DOM, which would point to the accessibility API, which would point back to the AT.  As a community, we can remain in this circle jerk or take steps to fix the problem.

How do we move forward?

I don’t know.  I would like to say that we should just jump up and down and insist that people follow the guidelines when making their web sites, and that AT vendors either fix their support for the guidelines or suffer the consequences of providing a substandard solution to their users.  If the browser doesn’t expose the DOM properly, then fix the browser.  If it’s an open source browser, you can fix it yourself.  If it’s open source AT, you can fix it yourself.

The more I think about the document accessibility wars, the more I start to cringe in horror.  The entire accessibility world seems to have sweated bullets to get the Internet into something of an accessible form and, according to RNIB, more than 90% of all English language web sites still do not comply.  On top of that, how many PDF documents are authored to the accessibility standards?  I think I’ve seen a grand total of about two dozen (out of thousands) accessible Flash objects.  Now we have the emergence of ODF and its accessibility guidelines.  There are also SMIL and DAISY and all sorts of other standards kicking around that leave full compliance up to the author.

Part of the beauty of the Internet is the Wild West-like anarchic state it lives in.  No one can enforce compliance with any standard on anyone.  This has led to a ton of really creative ideas and cool technologies; it has also led to a very chaotic state for people in the accessibility biz.  I really don’t see a clear plan for finding our way out of this mess either.

I apologize for being so gloomy on this subject.  I admit I still carry some resentment over the fact that JAWS and HPR got out way ahead of everyone else on standards compliance but had to hear “what about the people using some other screen reader that ignored the guidelines” so many times that I thought I would puke.  I understand the economic burden of switching screen readers, both in terms of dollars to purchase a new product and in terms of time to learn a new UI, but I also understand that the vendors of AT products that didn’t comply with the guidelines chose not to and, instead, invested their development time and money in other features.  Why do we reward the vendors who ignored a published set of guidelines for so many years by working around their needs, rather than sticking to the guidelines and letting those who choose to walk their own path wander as they may, trying to sell products that fall further and further behind on web sites that do comply?  Isn’t this how a free market should work?

Needless to say, I’ve rambled and ranted enough today.  I really don’t have any good answers to these problems.  I share Matt’s frustration that making a web site accessible to a broad range of AT products is too expensive and time consuming.  Maybe a live team of testers with access to all of the AT is the only answer.  So, Matt, give me a call if you want to start a non-profit company, supported by member dues and “by the hour” fees, that can do such testing.

Also, if anyone else has an answer or even an idea on how to address this problem, please chime in with a comment.  I know, and have been relatively friendly with, a lot of people involved in the web standards and guidelines effort.  I will take the liberty of speaking for this entire community, drawn from AT companies, IT companies, technology companies, academia, standards and government experts and all the others who participated in some manner, by saying that I think we are all somewhere between somewhat and very frustrated with the overall lack of progress toward a truly accessible web.  There is enough blame for all of us to take our share home, but we should listen very closely when people like Matt, Judy and the others who put years of full-time effort into the WAI speak up with their frustrations.  These people had to deal with a lot of big egos and a lot of smart people with good but conflicting ideas to come up with a set of guidelines that too many “accessibility” professionals choose to ignore, entirely or in part.  The only way to standardize things well enough is to follow the standards and, unfortunately, I don’t have the influence to get all of the AT companies to take notice and get with the program.

Subscribe to the Blind Confidential RSS Feed at: Blindconfidential

I'm an accessibility advocate working on issues involving technology and people with print impairment. I'm a stoner, crackpot, hacker and all around decent fellow. I blog at this site and occasionally contribute to Skepchick. I'm a skeptic, atheist, humanist and all around left wing sort. You can follow this blog in your favorite RSS reader, and you can also view my Twitter profile (@gonz_blinko) and follow me there.

12 thoughts on “The Quagmire of Web Accessibility”

  1. Interesting. I’m not sure I have all the answers yet either, as I have absolutely no experience in web design. However, I have consulted with a couple of website owners/creators regarding the accessibility of their respective websites. I am currently doing the same thing for a nonprofit organization whose mission is to create living opportunities for people with special needs. Regarding PDF documents, this website does have a few of them. I have thus far been unable to get these types of documents to read well, or read at all, with JAWS or Window-Eyes. I know there’s a way to do it because it even says so on Adobe’s website, and if I can get help from any of you on this I would be eternally grateful. I’ve even tried a couple of programs which claim to convert PDF documents into other formats, but success hasn’t come yet.

  2. I think the only real solution to this mess, as you stated, is to have a team with access to all AT products. That’s a really interesting idea you have for starting up a nonprofit organization to fulfill this mission. I had thought about offering my services as a web designer/tester in some sort of business arrangement, but I was discouraged when I did some research and seemed to hear overwhelmingly “That’s a great idea, but we can’t pay you.” Perhaps you will have better luck. I’d love to help any way I can.

  3. If I understand the argument correctly, I think you are expecting us AT vendors to reveal what technology we use to read what is on the screen when a web page is displayed?  We currently all use completely different methods and I doubt you are going to get AT vendors to reveal any secrets – commercial sensitivity and all that. We at Dolphin certainly wouldn’t (grin). It is worth bearing in mind that I designed Hal’s web access to work on web pages that weren’t correctly authored, because most web pages out there aren’t. We have to design for failure. If your web page passes the W3C validation tools, then Hal will be fine. That’s the validation tool I always refer web developers to.

    Ian Wild
    Dolphin Computer Access (UK).

  4. I thought Matt’s idea that software could sufficiently simulate AT to be naive, but could not articulate the reasons well. Your developer credentials are impeccable, and your writing lucid, so thanks for debunking such a notion.

    The analogy to cell phone testing and emulation modality doesn’t hold up to scrutiny. The one is thousands of hardware models with a known embedded and stable OS and simple operation that has already gone through an exhaustive QA process. Sure, there are lots of unknowns, but the situation is relatively straightforward to quantify and define parameters. In comparison, there are scores of AT applications, but the permutations are complex and dependent on a fragile and variable OS, subject to the skill of the end-user, and relatively buggy to start.

    I like your idea Chris that there could be a service based business for such testing. I was going to ask why you thought it had to be non-profit, but there have been several commercial endeavors to provide similar accreditation but none has been really successful.

  5. Update: I finally got satisfactory help with the PDF problem I mentioned here. A few days ago a fully-sighted friend was able to configure my system for Adobe 7.

  6. Hi Friend,
    Congratulations on this nice-looking blog. In this post, everything about web design accessibility has meaningful information that would benefit others who are interested in web design.
