Smart Homes, User Agents and People With Vision Impairments

Recently, in my life away from this blog, I have been doing a lot of research into smart devices, smart houses and how people with vision impairments may best be able to use them.  I’ve read about a few blind people hacking together their own smart spaces but we cannot expect those without a strong technical background and the zeal of a hobbyist to go the DIY route.

Since joining the V2 Standard Committee a few years back, ideas about smart homes and smart technologies have been at the forefront of my thinking.  Freedom Scientific did a pretty nifty job with PAC Mate Commander, letting a PM serve as an infrared-based universal remote control which, as far as I know, makes it the first user agent that can be operated by a blind person.  Unfortunately, the FS program only handles IR and does nothing about UPnP, Home Net or any of the other smart protocols.

In the past few years, an increasingly large number of mainstream companies have entered the smart appliance business (for the purposes of this article, I use the word “appliance” to mean anything electronic.  So, herein, an appliance can be anything from a dishwasher to an MP3 player to a router to a PDA.)  The list of companies that have jumped on the smart appliance bandwagon might surprise some readers as they are not those that we typically associate with high technology.  Companies like Sears/Kenmore, Maytag, General Electric and many others we ordinarily associate with “old” technology have started selling refrigerators, dishwashers and microwave ovens that can be connected to a smart house network.

What does this mean for blind people?

If all were perfect, adding intelligence to home appliances would make them much more accessible to people with vision impairments.  A user could take their user agent and, through it, have all of the controls on their oven read to them and, much like using a web page or accessible application, set the oven to do what they want.  This would, of course, also be true for all of the other appliances with annoying flat panel, LCD or on screen menus.

Unfortunately, we do not live in a technology utopia.  To begin with, there isn’t yet a user agent accessible to blind people.  Next, AT companies do not seem to be cooperating with the appliance companies and have little or no presence at conferences like the Consumer Electronics Show or the huge home appliances conference held in Chicago every year.  Some of the mainstream companies are trying to build in self-voicing elements that blind people can use, but these are inconsistent and don’t always expose all of the features.  I find many self-voicing products from the mainstream very frustrating because they talk so slowly and offer no way to change the speech rate.

What about a Universal User Agent for people with vision impairments?

This project can definitely be done and I know some people in universities working on the problem as I type this entry.  I urge my friends in research not to attempt to design new hardware to suit this purpose but, rather, to use off-the-shelf Windows Mobile devices like the iPAQ running Mobile Speak Pocket, as that will be the least expensive and least bulky solution.  If an off-the-shelf device won’t do the trick (for a deaf-blind person, for instance) I recommend the PAC Mate from Freedom Scientific as it also runs Windows Mobile, so the software can be written to work with both commercial PDA units and the PM at the same time.

What problems stand in the way of developing a user agent?

A lack of standardization is the biggest hurdle to success.  Unless a consumer wants to be locked into a single source for their smart devices, they may have trouble finding products that are compatible with each other.  A networking bridge could probably be created to harmonize diverse standards, but this would be a tricky bit of software engineering that could have a lot of reliability problems if not done in a letter-perfect manner.
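The harmonizing bridge described above is, in essence, an adapter pattern: each protocol gets a translator and the bridge routes generic commands to whichever translator a device speaks.  Here is a minimal Python sketch; the protocol names and message formats are purely hypothetical illustrations, not any real smart-home API:

```python
class ProtocolAdapter:
    """Translates generic appliance commands into one vendor protocol."""
    def send(self, device, command):
        raise NotImplementedError

class UPnPAdapter(ProtocolAdapter):
    def send(self, device, command):
        # A real bridge would emit a SOAP action over HTTP here.
        return f"upnp://{device}/{command}"

class PowerLineAdapter(ProtocolAdapter):
    def send(self, device, command):
        # Imagined power-line protocol using device/unit codes.
        return f"pl:{device}:{command}"

class Bridge:
    """Routes generic commands to whichever protocol a device speaks."""
    def __init__(self):
        self._devices = {}  # device name -> adapter

    def register(self, device, adapter):
        self._devices[device] = adapter

    def send(self, device, command):
        return self._devices[device].send(device, command)

bridge = Bridge()
bridge.register("oven", UPnPAdapter())
bridge.register("lamp", PowerLineAdapter())
```

The user agent then talks only to the bridge; adding a new protocol means adding one adapter class, leaving the rest of the system untouched.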

What do people with vision impairments want in a smart house?

This question is one I would like to hear answered by people who read this blog.  Me, I want everything and won’t be happy until every appliance, whether I have a use for it or not, has been made accessible.  I know that I want a way to work around the flat panel, LCD and on screen menus for the types of products I already own (refrigerator, dishwasher, washing machine, dryer, stereo, DVR, VCR, television, electric piano, drum machine, sequencer, etc.).  I don’t know which appliances other people want to use so cannot set priorities beyond the most obvious.

So, readers of Blind Confidential, please send me your ideas on what you would like to have in your smart house of the future and I’ll try to find people who are working on such systems (like my friends at U. Florida) and see if we can get your ideas integrated.  If you do a quick Google search on smart devices, appliances, systems, etc. you will find thousands of different products out there.  The smart home future is upon us and now we need to make it accessible to people with vision impairments.

Subscribe to the Blind Confidential RSS Feed at: http://feeds.feedburner.com/Blindconfidential

Satire and The Use of the Word "Accessible"

Was the post about urinals serious?  Yes, it seriously satirized people who spend an inordinate amount of time and effort searching for things about which to scream discrimination.  I also intended to satirize the literary form in which the authors of such pieces choose to write.  Finally, I wanted to say, “Chris, the level of seriousness on your blog has grown beyond proportion and it’s time to insert something as silly as possible to lower the tone a bit.”  Recently, the topics discussed here have moved to the very esoteric, so I also wanted to include something as pedestrian as a public restroom to bring us back to issues that more than a few accessibility hackers find interesting.  I do find it amusing that at least one person took the post seriously enough to write a long and critical response, and I hope he returns, reads, enjoys and perhaps learns something from the other posts that appear in the Blind Confidential blog.  People like Will, Peter and me, and hopefully others who will join us in the future, are, in fact, among the top experts in access technology and will join the other research and development people to create the next generation and beyond of access technology.  So, in the future, if this blog gets far off serious topics, assume it is satire.

Today, I want to explore the definition of accessibility and how the term gets thrown around by different groups.  I will, in a serious manner, take a look at standards and the lack thereof and what that means for accessibility and jobs for people with profound vision impairments.

If a web validation and repair tool, like Ramp from Deque Systems, processes a web site and reports that the HTML passed all of the tests, can the site immediately receive the “accessibility stamp of approval”?  The answer is no.  If a web site includes alternate text for all of its graphics, image map links and graphical links, it may still remain completely unusable by a person reading the page with a screen reader.  Specifically, if the authors wanted to find a way to get past a validation tool without doing any actual work, they could simply put the word “graphic” in every alt tag so, while a sighted user sees “News,” “Weather,” and “Sports,” the screen reader user hears “graphic, graphic, graphic.”  Thus the site can claim compliance but not accessibility.
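The “graphic, graphic, graphic” trick is, ironically, easy to catch mechanically.  As an illustration, here is a small Python sketch using only the standard library’s html.parser that flags alt text which would pass a bare validator yet tells a screen reader nothing; the word list is my own guess at common offenders, not part of any standard:

```python
from html.parser import HTMLParser

# Alt text that satisfies "alt attribute present" checks but conveys nothing.
USELESS_ALT = {"", "graphic", "image", "picture", "spacer"}

class AltTextAuditor(HTMLParser):
    """Collects img tags whose alt text is technically present but useless."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        alt = (attrs.get("alt") or "").strip().lower()
        if alt in USELESS_ALT:
            self.flagged.append(attrs.get("src", "?"))

page = '<img src="news.gif" alt="graphic"><img src="wx.gif" alt="Weather">'
auditor = AltTextAuditor()
auditor.feed(page)
# auditor.flagged now lists only the image whose alt text is meaningless.
```

A validator that stopped at “every image has an alt attribute” would pass both images; the audit above shows why presence and usefulness are different tests.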

Next, we may encounter a web site that puts useful text in its alt-text tags so the screen reader hears “News, Weather, and Sports” but the site contains so many other elements that finding the points of interest takes a tremendous amount of time, even if the user employs the efficiency features built into screen readers like HPR, JAWS and Window-Eyes.  So, can a site that follows some of the spirit of the accessibility standards and guidelines truly receive the label “accessible” if it requires a lot more time to navigate by a screen reader user than a sightie?

Moving on from web accessibility to desktop computers, what percentage of the features of a program should a screen reader have available before an application gets called “accessible,” and does the screen reader need to expose a usable interface before it claims compatibility with an application?  Virtually all Windows screen readers claim to be compatible with Microsoft Word, an application which is essential to the jobs of nearly everyone who works with a computer.  Virtually none of the screen readers work with more than fifty percent of the features in Microsoft Word, so can the AT vendors’ claims of accessibility be considered true?  Or are they 50% true if they only work with 50% of the features?

Recently, I did a very unscientific comparison between JAWS, Freedom Box System Access and Window-Eyes in Microsoft Word.  I didn’t have a recent version of HAL installed or I would have included it as well.  JAWS, according to its help file and my trials, did the most, but I could find numerous modes and features of Word that it couldn’t handle at all and, as some features can only be reached through other features, I could not get to a number of them to even test their usability (this being true for all three of the products I tried).

Next, I moved on to FB System Access.  The Serotek guys have really come a long way in their latest release.  They do quite a lot of the things JAWS can do but they haven’t caught up entirely.  FB System Access does deserve applause for exposing some items differently from JAWS and, in some cases, they found a more usable way of presenting information which hasn’t been explored too deeply in the world of screen readers.  Of course, the Serotek guys have had all of the terrific features added to JAWS over the past seven years to use as an example, and having a blind CEO who needs to use Word provides the Freedom Box guys with extra motivation to push the usability envelope.

Window-Eyes, I sadly say, continues to pull up the rear.  JAWS users have been able to read tables, columns and embedded spreadsheets in Word for years.  These, along with some other field detection features, are recent additions to Window-Eyes.  Neither Window-Eyes nor Freedom Box did anything with the collaboration features (JAWS does a passable job) which any writer requires to work with editors, especially if a document has multiple authors, a common occurrence in many workplaces and academic settings.  So, in response to what I believe must have been market pressures, GW added some new features to make Word more usable for its users.  I recommend they take a copy of JAWS, its help file and the Freedom Scientific MS Word tutorial and use them as a specification for their next set of improvements for the world’s most popular word processor.

Both JAWS and Window-Eyes claim support for Excel.  Blind people who want to make spreadsheets beyond the most primitive, though, must use JAWS as very few features work well or at all with Window-Eyes.  Freedom Box promises better Excel support in its next release and its current support is similar to that in Window-Eyes.

All three that I researched and tested, though, came up far short of 100% accessibility (one of these days, I’ll set up three or four computers next to each other, create a table with MS Office features on one axis and screen readers on the other, fill in what does and does not work to generate an exact percentage and, perhaps, weight each feature by my perceived value of it to come up with a score that represents actual use rather than features for features’ sake).  All of them claim compatibility and, therefore, the user assumes accessibility.
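The feature-matrix score I describe wanting to build can be sketched in a few lines of Python.  The features, weights and support values below are invented placeholders for illustration, not real measurements of any screen reader:

```python
def accessibility_score(support, weights):
    """Weighted fraction of features that actually work.

    support: feature name -> bool (does it work?)
    weights: feature name -> importance to real-world use
    """
    total = sum(weights.values())
    working = sum(w for feature, w in weights.items() if support.get(feature, False))
    return working / total

# Hypothetical weights reflecting perceived value of each Word feature.
weights = {"read text": 5, "tables": 3, "track changes": 2}

# Hypothetical test results for one product.
product_a = {"read text": True, "tables": True, "track changes": False}

score = accessibility_score(product_a, weights)  # (5 + 3) / 10 = 0.8
```

Weighting matters: a product that handles eight trivial features but not basic text reading should score lower than one that nails the few features everyone uses all day.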

Back to the question: what percentage of an application or system needs to be usable by a screen reader user before it can be called accessible?  Recently, Apple added a screen reader to its Macintosh line of products.  According to Jay Leventhal’s scathing review, it probably works with less than ten percent of the computer it comes with.  Can a Macintosh, with such pitiful usability by a blind person, be called accessible?

Durable medical goods, like wheelchairs and such, as well as medical testing devices, blood pressure machines and the like, are all subject to FDA approval.  If these technologies do not meet the government standards, they cannot be sold to the population that requires them.  I do not want to suggest that the FDA or any other governmental body get into the regulation of screen readers, but I do wish there could be a voluntary program run by an independent organization (ATIA?) that publishes detailed comparisons of screen readers, including a lot of objective tests (does it work with feature X, yes or no?) and some subjective tests (can a user complete a task more easily or in less time with product A, B or C?).  This could finally give screen reader users a way to see past the marketing hype and half-assed solutions that often carry the label “accessible” in the sales literature from AT companies.

In my mind, a product must be “usable” before it can receive the label “accessible,” and AT companies should do their best to ensure that this is truly the case.  Before claiming that something is accessible, please make sure that it can be used by your customers.  Right now, as I mentioned in my post about how Eric Damery blazes the usability trail, I think that Freedom Scientific and JAWS still lead the pack, but my friends at Serotek are making a mad dash to catch up and the market share frontrunners like FS and GW should be looking over their shoulders a bit before Mikey catches them.

Subscribe to the Blind Confidential RSS Feed at: http://feeds.feedburner.com/Blindconfidential

Discrimination Against Blind Men Due to a Lack of Standards

A number of years ago, I wrote an article about a pervasive problem in the design of public facilities that directly discriminates against men with vision impairments.  This paper contained footnotes, statements from human subjects, statistical analysis of the distribution of the offending design problems and a number of case studies describing real world problems.  I sent this work off to many different publications involved in blindness issues, universal design, architecture and construction.  None of these publications saw fit to run my item and few even provided me with the courtesy of a response.  As I said, this problem affects men with vision impairments; there may be an analogue among blind women but I have not studied such, so, to my female readers, if you have encountered problems similar to those I describe for men below, please write to me so we can expose that discrimination as well.

This blog entry will contain a shorter and less scholarly description of the problem and how it affects blind men.  I will not include citations, footnotes or other aspects of academic publications as this will not be peer reviewed and doesn’t need such.

Description of the problem:

There exists a rarely discussed area of discrimination against blind men resulting from the lack of standardization of urinals and other bathroom fixtures.  The outcome is that men with profound vision impairments often end up with their own bodily fluids and that of others on their bodies and clothing.  This is disgusting, unsanitary and unhealthy.  It is also degrading in the worst possible way.

The manifestation of the problem:

When a blind man enters a public restroom (a place of public accommodation and, therefore, subject to the ADA requirement for reasonable accommodations) for the first time, he knows neither the layout of the room nor the type of fixtures he may encounter.  If he is in the public restroom alone, he cannot ask anyone where the urinals, sinks or toilets are located.  So, the independent man with profound vision impairment must start swinging his stick around in hopes of hitting porcelain.

When he finally locates the type of fixture he needs, he must then figure out which design of said fixture he has found and adjust himself to use it appropriately.  Herein lies the problem: many, if not most, public restrooms are cleaned far less often than one would think necessary to maintain truly sanitary conditions.  Even the most meticulously cleaned restrooms may have encountered a biological disaster shortly before the blind man enters, so one must assume that even these might contain hazardous fluids.

Thus, how should a man with profound vision impairment approach the situation?  When the blind man walks toward the urinal he has located with his cane, one of his legs may bump into part of the urinal that extends far beyond the portion that his cane has touched.  This, if a previous user missed the target a bit, may result in a stain on the blind man’s clothing and there is no one who will pay the dry cleaning bill for something that was clearly the fault of others.

Surely, he should not start feeling his way around until he determines the shape of the urinal and the location of the target, as this would require his hands to touch potentially dangerous fluids.  I know of no blind men who travel with disposable rubber gloves and I don’t think it is right to expect them to do so.  The blind man must, in lieu of using his hands, poke around with his cane to determine the shape of the urinal and then make a best guess as to the target.

With the presumed location of the target in his mind, the blind man must then find a place to lean his cane, open his pants, take aim and fire away.  Here resides the second problem: the different shapes, heights and sizes of urinals create different splash back patterns and those that locate their target on the floor can easily be missed, resulting in peeing on one’s own shoes.  Variants on splash back also arise from different water levels in the urinals; those with a lot of water will have fewer splashes than those that are mostly damp but not filled with fluid.  Thus, if the man with profound vision impairment misses the target (also known as the sweet spot in the lexicon of plumbing) the splash back can be fierce and result in the gentleman ruining a pair of pants.

Once the man with profound vision impairment has finished using the urinal, he must now find the flusher.  This problem also curses toilets.  Groping around in an unsanitary place to find a handle, knob or, in some cases, push button results in acquiring any and all kinds of fluids on one’s hands.  The blind man must then pick up his cane to go off to find the sinks, thus transferring some of the hazardous fluids onto the handle of his cane.

Once he finds the sinks, he must grope around to find the soap and then wash his hands.  He should probably also wash the handle of his cane as it has also been exposed to this bio-hazard.

Finally, the blind man must find the paper towels.  This often results in more groping in an unsanitary place which effectively defeats the purpose of having washed his hands in the first place.  

So, the lack of uniformity in the design of public restrooms and the fixtures installed therein results in one of the most deplorable hidden discriminations against men with profound vision impairments.  A longer exploration of this matter would include non-standard toilet paper dispensers and those racquetball court sized toilet stalls designed for our friends in wheelchairs.

Subscribe to the Blind Confidential RSS Feed at: http://feeds.feedburner.com/Blindconfidential

Interesting Electronic Mobility Systems

Like Saturdays, I don’t typically post here on Sundays either.  I like to take the weekend off, let new ideas gel and start the week anew on Mondays.  Yesterday, though, a friend sent me two articles about how different mobility systems get used by people with vision impairments.  The first was a comparison between BrailleNote GPS and Trekker from Humanware and StreetTalk from Freedom Scientific.  This review appeared in the February 2006 issue of the Braille Monitor from NFB and covers only the solutions that were demonstrated at the NFB conference last July.  In the seven months since, another talking GPS system based upon Wayfinder has appeared which, when used with Talx or Mobile Speak on a Symbian Series 60 cell phone or with Mobile Speak Pocket on a PDA/phone like the HP 6515 (which comes with the GPS receiver built in), provides an excellent solution for considerably less expense than those from the AT vendors.  The Wayfinder solution, which I use on my HP 6515, is my favorite and I think it is certainly worth checking out.  Wayfinder provides a free five day demo on their web site and the installer works nicely with JAWS, so, if you already have a phone running Talx or MS, or a PDA/phone with MSP, I suggest you give it a try.

The second article, a press release really, describes a system that sounds somewhat like Talking Signs but, just to keep things confusing, is also called “Wayfinder.”  Nonetheless, it sounds like our friends in the UK are doing some pretty nifty things with technology in open spaces.

The following article comes to us from Birmingham, England:

Birmingham City Council, UK
Saturday, February 11, 2006

Wayfinder system in Birmingham City Centre for Blind & Partially-Sighted
People

By Press Release

Summary: A new facility to help blind and visually impaired people navigate
their way around the heart of Birmingham city centre will be launched in
Spring 2006. The 60 Wayfinder units will be installed around the city
centre, providing users with practical audible information, to confirm their
location and assist them to reach their destination safely.  

Most units are being installed on existing street furniture to minimise
street clutter and, where no street furniture exists, being fixed into new
purpose built stainless steel posts located at the back of footways. Users
will carry a trigger card to activate the speaker unit when within range.
These triggers will be made available in Birmingham’s principal languages.
Details on how and where to obtain the triggers will be available shortly.

The total cost of the Wayfinder scheme is £165,000, £65,000 of which was
recently agreed by Councillor Len Gregory, Cabinet Member for Transportation
& Street Services. Cllr Gregory said; “This is an excellent system,
assisting blind and partially sighted people find their way around
Birmingham city centre. It will help people more easily find transport in
the city, their places of work, shopping venues, public services and visitor
attractions, making Birmingham an even more accessible city”.

The city council has worked in partnership with many other agencies on this
project, including The Royal National Institute of the Blind (RNIB),
Birmingham Focus on Blindness, Guide Dogs for the Blind, Queen Alexandra
College, National Federation for the Blind, BBC Birmingham and others. Many
of these organisations have been represented by people with a visual
impairment.

Rob Legge, Chief Executive, Birmingham Focus on Blindness, said; “Sight loss
is a frightening and traumatic experience that affects almost every aspect
of a person’s life! Our aim is to help the 30,000 children and adults in
Birmingham who have sight loss to achieve a better quality of life.
Wayfinder goes a long way to achieving this. For people with sight
impairment, travelling around the city independently is a major problem, so
Birmingham Focus is delighted to be working with Birmingham City Council and
others on the Wayfinder project.”

Following the launch, the City Council will be encouraging users to give
their views on Wayfinder to enable the system to be fully adapted to their
needs.

Reference Number 8375
Press contact Kathy Williams 0121 303 3764
Issue Date 10 February 2006




http://www.birmingham.gov.uk/GenerateContent?CONTENT_ITEM_ID=76039&CONTENT_ITEM_TYPE=9&MENU_ID=276



Subscribe to the Blind Confidential RSS Feed at: http://feeds.feedburner.com/Blindconfidential

Gnome Accessibility and Complex Data Relationships and a Little Extra

I rarely post to Blind Confidential on weekends but, today, I wanted to present a couple of short statements about this blog and its recent contents.  One adds a little more to the API conversation and the other talks about the perspective of some of my posts.

On the matter of complex relationships and accessibility layers:

This morning, I read the post made by Anonymous about the relationships possible in the new gnome accessibility API.  Before that post, no one had brought this to my attention and I accept the blame for not having done thorough enough research on the matter before saying that it couldn’t or wouldn’t happen.  The relationships page in the gnome API documentation clearly demonstrates that it can happen and, in fact, people are doing it today.

I still do not know how to motivate application developers to add this to new programs or to retrofit it to the billions of lines of code already out there but, as far as I could read, this API should do the trick quite well.

I may have an answer to my economic argument as well.  This may already be the case as I haven’t spent the time to read up on how the gnome API attaches the relationship facility to the rest of the system but, if the idea doesn’t already exist, I propose that the relationship system be attached to the help system.  In many applications that use complex data relationships, users without disabilities often find themselves lost in the maze of information.  They find that making one change to the data causes a ton of side effects that they didn’t predict.  Microsoft Excel has a pretty nice little window that displays the dependency tree in a spreadsheet but, to some sighted people I’ve asked to look at it, the diagram gets far too complicated to understand in truly massive and highly complex spreadsheets.  I have witnessed sighted and blind users alike struggle with predecessor and dependency relationships in MS Project which could also be simplified by a system like this.

My notion of attaching the relationship facility to the help system will provide an answer for mainstream and AT users alike.  In real time, someone using a project management tool can query, “What will happen if I change this value or break this link?”  Having the relationship tree in a manner that can be delivered sensibly to humans will solve a huge number of problems for AT and be enormously useful to anyone who has done a handful of “what if” changes to a spreadsheet and then cannot figure out why the whole thing has gone kind of nutty.
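A relationship facility of the kind described above could, at its core, be a dependency graph that answers “what changes if I change this?”  Here is a hypothetical Python sketch; the cell names and formulas are made up for illustration and this is not how Excel, Project or the gnome API actually store relationships:

```python
from collections import defaultdict

class RelationshipGraph:
    """Answers the 'what will happen if I change this value?' query."""
    def __init__(self):
        self._dependents = defaultdict(set)  # cell -> cells that use its value

    def add_dependency(self, cell, depends_on):
        self._dependents[depends_on].add(cell)

    def affected_by(self, cell):
        """All cells that would change if `cell` changes (transitive closure)."""
        seen, stack = set(), [cell]
        while stack:
            for dependent in self._dependents[stack.pop()]:
                if dependent not in seen:
                    seen.add(dependent)
                    stack.append(dependent)
        return seen

g = RelationshipGraph()
g.add_dependency("B1", "A1")  # imagine B1 = A1 * 2
g.add_dependency("C1", "B1")  # imagine C1 = B1 + 5
# g.affected_by("A1") walks the chain: changing A1 ripples to B1 and C1.
```

The same traversal that lets a screen reader announce “changing this cell affects B1 and C1” could drive a help-system answer for a sighted user lost in a massive spreadsheet, which is exactly the shared mainstream/AT benefit argued for above.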

I feel like a kid at Christmas, as this seems to be exactly what I spent so many hours asking for in AT/IT compatibility meetings, Accessibility Forum meetings and every other venue where I could speak, banging on tables and insisting on a mechanism to expose complex contextual relationships in applications.  My hat goes off to Peter and the other fine hackers behind the gnome accessibility project.  Please tell me, privately or here, how I can get a demo of this in action.

Why has Blind Confidential been so gloomy lately?

I’ve looked back at the posts I’ve put here in the past couple of weeks.  I find that I write far more critical pieces about the past, present and future of technology and people with vision impairments.  I do believe that most of the criticism I’ve presented can be validated and should be remedied.  I also like hearing people point out where I said something false so it can be corrected.  In reality, though, I have a very positive outlook on the future of technologies for us blinks and will post about the more optimistic issues in the future.  I have been very busy with other projects lately and I guess that, in the short time I have to spend writing this blog, I find that negatives come faster or are easier to discuss than the cool, exciting new stuff in research and technology.

My current life has me working as a freelance writer, an itinerant research scientist and a self-described purveyor of discount wisdom.  This is a very cool way to spend one’s life.  I get exposed to the coolest new research, get to go to the cool egghead conferences where people push ideas rather than sell products, and I get to hang out with a lot of really smart people whose agenda is to further science.  If I had been given the opportunity to describe my dream job, it would not have come out as good as this.

So, sorry for so much criticism lately.  I’ll try to post more researchy stuff and good news like the item about the gnome accessibility layer above.

Subscribe to the Blind Confidential RSS Feed at: http://feeds.feedburner.com/Blindconfidential

AT Products and OS Security

Recently, the discussion on Blind Confidential has grown increasingly esoteric.  I doubt that too many people care as intensely about accessibility API layers and where they live as those of us engaged in the discussion do.  Today, I’ll move onto another topic that will explain to many why the accessibility layer discussion has such great importance and why, ultimately, it must reside at the operating system level and why you should care.

One of the dirty little secrets of the AT industry concerns how virtually all screen readers, magnifiers, on-screen keyboards and other very important programs used by people with disabilities, at some level, compromise system security.  Some Windows based AT products (none from Freedom Scientific, I am happy to say) go so far as to turn off some security settings in the Windows Registry during their installation process and require that they remain off to function properly.  All AT products that work in the login screen and anywhere else that passwords get entered compromise system security.

At this point in time, though, no other techniques exist to make these aspects of a computing environment accessible to people who need to use the type of AT products that hook the keyboard, mouse and/or video systems.  The Macintosh screen reader, on screen keyboard and magnifier are the exceptions in this case as they do not hook video at all and they get information from the keyboard and mouse through the Apple accessibility layer.  Unfortunately, the Apple screen reader isn’t, according to Access World, very good and a professional could not use it in a workplace.  So, this leaves the AT users in a quandary, compromise security or don’t use programs, computers, web sites, files or anything else that requires a password.  Neither of these options provides a good outcome.

What about AT products compromise security?

I will use screen readers as my example but the same problems exist in other types of AT products too.  If a screen reader can say, “star star star…” while the user types in their password and also respond to its built-in keystrokes at the same time, it can know your password, as it looked at every keystroke to determine whether you were issuing a command or typing text.  Fortunately, all of the AT vendors I have ever met, which probably represents a large sample of the business, are overwhelmingly scrupulous people who would never use this information in an illicit manner.  Unfortunately, if the operating systems permit AT products to hook sensitive aspects of the information stream, they also provide the opportunity for the nastiest of the network criminals to do the same thing.  So, to keep computer systems entirely secure, the OS developers need to close off some of the ways AT products have traditionally received information.
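To make that concrete, here is a deliberately simplified Python sketch, not how any real screen reader is implemented, showing why a keyboard hook must see every keystroke, password characters included, just to distinguish its own commands from ordinary typing:

```python
class ScreenReaderHook:
    """Illustrative only: a hook that speaks keys and answers its own commands."""
    COMMANDS = {"INSERT+T": "read title bar"}  # hypothetical key binding

    def __init__(self):
        self.seen = []    # every raw keystroke, including password characters
        self.spoken = []  # what the user actually hears

    def on_key(self, key, in_password_field=False):
        # The hook must inspect the raw key first; there is no way to decide
        # "is this one of my commands?" without seeing it.
        self.seen.append(key)
        if key in self.COMMANDS:
            self.spoken.append(self.COMMANDS[key])
        elif in_password_field:
            self.spoken.append("star")  # a mask is spoken, but the key was observed
        else:
            self.spoken.append(key)

hook = ScreenReaderHook()
for k in "abc":
    hook.on_key(k, in_password_field=True)
# The user heard "star star star", yet hook.seen holds the real characters.
```

The honest screen reader discards what it saw; the point is that the operating system gave it, and could give any malicious program using the same hook, the real password.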

Do AT products make my computer any less secure?

The AT programs that change security related registry settings do, in fact, make your system less secure.  I know for certain that JAWS, MAGic, Connect Outloud and Serotek’s Freedom Box System Access do not make such registry changes.  I have not paid close enough attention in the past year to whether or not those that had done this in the past have fixed this problem so I will not name names as they may have remedied the situation already.

Windows based AT programs that do not change these registry settings do not make your system any less secure than any other piece of software.  The techniques used by AT developers to gain access to this information show up on various hacker-oriented web sites around the world, with very good documentation as to how one can do these things.

So, the bad guys already know how to do this stuff and we all spend money on virus checkers, spyware eliminators, firewalls and other system security programs to make sure that the work of the nefarious types stays off of our computers.

Is Windows the Only OS Subject to These Problems?

I don’t think so.  The Macintosh probably creates the greatest difficulty for the bad guys, but I don’t know enough about the GNU/Linux platform to make a truly informed statement about it.  I will say that the text-based SpeakUp screen access utility that I use on my GNU/Linux box could present a huge security threat in that, to install it, you have to modify your operating system kernel, which means that your screen reader has access to the lowest level and most dangerous information on your system.  Fortunately, if you make sure you get your SpeakUp distribution from a reputable source (like the project’s own web site), you can be sure that it is safe, as the people who maintain that distribution are also users of the software.  Also, open source screen readers expose their source code to the entire world and can, therefore, be inspected by other hackers to make sure that nothing malicious has been added.

Why will a new accessibility API be better?

If the accessibility layer lives at the operating system level, it can enforce the same security constraints on all programs.  By removing direct access to the input and output streams, the operating system itself becomes more secure and, therefore, less prone to invasions by software with criminal intent.  Unfortunately, this means that AT users need to rely upon information delivered by the accessibility API and, if this information is less rich than that which they can get today, it will result in computers becoming less usable by people with vision impairments and probably other disabilities as well.

What does the future hold?

As I know people like Peter from Sun, Rob and others up at Microsoft and Mary Beth and Travis from Apple personally, I can say that they are all working very hard on the next generation of operating system accessibility interfaces.  They are all very smart and highly dedicated people and those who work with them in the various assistive technology groups at OS companies put in everything they can muster to make the next generation as good as they can.  They also accept a fair amount of input from AT and application developers alike and integrate much of this feedback into their designs.

I’m not sure that I will ever find an accessibility layer in an OS to expose everything I want but some of this may be impossible (reference my comment on a telepathy API earlier this week).  The best outcome will happen when the OS, AT and application developers can all meet their needs with a single solution and I hope that through outreach and communication this will happen someday.

What do I mean when I use the word hacker?

I am an old timer in the programming world.  I started programming as a hobby when I was eleven years old, when I first got access to a PDP-8 at Lawrence Berkeley Labs, and I have been hooked on it since.  I turned professional in 1979 and have worked in the field ever since.  Back in the old days, before the uninformed media got hold of our vocabulary, the word “hacker” meant “very talented and curious programmer type.”  There are good hackers, people like Richard Stallman, who hack for the benefit of the entire world.  There are criminal hackers who use their skills to break into systems and steal money and/or information.  There are tourist hackers who will, illegally, work their way into a secure system just for the challenge of doing so.  The tourists are mostly harmless but are trespassing and, therefore, breaking laws.

Then, there is the group I dislike more than any of the others; these are what I call the vandals.  It is the vandals who launch worms and viruses just for the sake of messing everyone else up.  It is these vandals who write stupid Outlook scripts to send emails to everyone in your address book.  The vandals aren’t even hackers, by the definition I gave above; they are just troublemakers.  The overwhelming majority of vandals have little talent, and the nasty programs they set upon the world can usually be written by any high school kid with a copy of Visual Basic and a little free time.  They are the technical equivalent of kids who throw rocks through windows or spray paint cars.  They are not hackers.

Subscribe to the Blind Confidential RSS Feed at: http://feeds.feedburner.com/Blindconfidential

Interesting Articles and a Little More on Accessibility APIs

I didn’t have much time to think of a topic for today’s post so, instead, I’ll provide some pointers to a few articles I’ve read recently that I found of particular interest.  At the bottom, I will add a few comments in response to Will’s post on our accessible API discussion as well.

These articles are in no particular order:

The first is about elders and their use of technology products, the Internet and Pocket PC devices.  It is in the UI Design Newsletter and is called Selling older users short.  It debunks various myths about older people and their use of technology products and, in my opinion, shows a promising opportunity for the Code Factory Mobile Speak Pocket product in a very large and mostly untapped market.

Next is an article about a very interesting motor sports event in India titled “Blind navigators show the way,” which could be a good application for StreetTalk from Freedom Scientific, Wayfinder with either MSP or Talks, or one of the other talking GPS solutions.  The page containing this article isn’t amazingly accessible, but you can find the important spots with a little poking around.

In my sighted days, I truly loved the visual arts.  Those of you who know me well know that I remain active in less visual fine art media like literature, poetry, music and, most recently, I’ve added tactile and audio arts to my interests.  Touch tours are becoming increasingly popular at museums so here are a couple of articles about them, “No longer impossible: blind embrace art and museums welcome blind” and “Museums make art accessible to blind.”

Remaining in the art world, here’s a pretty interesting article on a difficult web page about a blind artist, “Blind artist overcomes challenges.”  I am fairly certain that this is the first article I’ve ever read in the Pocono Record, a publication I had never thought I would ever read.  Isn’t the Internet swell?

Here’s an item from Japan (in English) about a new plastic sheet technology for displaying Braille.  I don’t know any more about this than what is on the web page, including how recent this innovation may be, but I thought it was apropos to the discussion about haptics going on here lately.

Special thanks to Professor William Mann, Eric Hicks and, most especially, Lisa Yayla, the owner and unofficial research librarian of the Adaptive Graphics mailing list hosted on freelists.org for sending me these pointers.

Back to APIs

Yesterday, Will Pearson posted two very well considered comments.  As I had guessed, he had some very valuable things to add to my deaf-blind posting and his ideas on accessibility APIs are also well founded.

I agree that, for generic information, building accessibility into the user interface library would solve many, even most, accessibility problems.  Microsoft did not build MSAA into MFC (the popular C++ library); they chose, instead, to put it at a lower level, in the common control layer.  This decision produced some very good outcomes, but only in applications that used standard controls.  Putting MSAA a level up, in MFC, would have solved the problem for some custom controls used in MFC applications but would have done absolutely nothing for Win32 applications or for programs whose UI was written with a different set of foundation classes yet employed standard controls.  So, Microsoft solved some of the problems by providing support for all applications that used standard controls, written using MFC or not, but relied upon the application developers to add MSAA to controls that diverged from the standard.

Unfortunately, most Windows applications, whether written using MFC, WTL or some other library, use anywhere from some to many inaccessible custom controls.  Also, a major problem for accessibility APIs, as we look to the future, is the set of applications that use proprietary, cross-platform UI libraries.

Tom Tom, the popular GPS program, is one example of how a proprietary, cross-platform UI library can render an application completely inaccessible.  If someone installs Tom Tom on an iPAQ running MSP or on a PAC Mate, they will find that the screen reader can only “see” some window titles and an occasional control.  To maintain a uniform visual look and feel across all of the platforms it supports (TT runs on Windows Mobile, Palm OS, Symbian and iPod, to name a few), Tom Tom has created its own, completely inaccessible UI library.  Tom Tom doesn’t even load standard fonts from the OS but, rather, builds a font library into its software.  This permits the company to keep its trademark appearance consistent on all platforms but completely destroys the possibility of any screen reader gaining access to its information.  (Off topic: if you need a portable talking GPS solution, buy Wayfinder or StreetTalk, as they work very well.  Wayfinder, from the mainstream, is much cheaper than Tom Tom, and StreetTalk is less expensive than the others designed specifically for blind users.)  So, even if an accessibility API existed on the platforms where Tom Tom runs, and it sat at the class library or user interface level, it wouldn’t work.

The combination of cross platform development and the desire to have a unique look and feel cause two of my lasting fears for the next generation of accessibility APIs – especially when we factor in the labor costs of retrofitting a new, even if cross-platform, user interface library to the billions of lines of code already deployed around the world.

Moving from the pragmatic and returning to the delivery of contextually interesting semantic information, I have yet to see how a generic control can have enough knowledge of its purpose to deliver truly useful information about what it is doing at any given point in time.  A button control, a table control, a list box control or a tree view control, to name a few, doesn’t understand what it contains or why it contains it.

I’ll return to our Visio organization chart example.  Let’s imagine a very simple chart with five names in it: Will, Chris, Peter, Eric and Ted.  Because Ted is a hall of famer, we’ll put him at the top, and because Eric and Chris are managers, we’ll have them report to Ted.  So, our Ted box has two arrows coming from it: one to the Chris box and the other to the Eric box.  Because Will is a hacker, he will report to Chris directly, so we’ll add an arrow from Chris to Will.  As Peter is an ideas guy and a hacker, he will report directly to Eric but indirectly to Chris and Ted, so we’ll add a solid arrow from Eric to Peter and dotted arrows from Ted and Chris to Peter as well.  Now, just to make matters interesting, we’ve decided that the ideas guys get to set priorities, so Peter and Eric will have dotted lines pointing to Chris, as he must have the engineers build what they design.

Our organization chart has six boxes: one for each person, plus the bounding box that contains the members.  If we assume that our accessibility API is extensive enough to include a rectangle control that understands it might also be a container, and a line control that knows its attributes (dotted, solid, etc.), we still do not have enough information to describe the relationships between the boxes unless the application itself provides a lot of supplementary information about the meaning of boxes and lines as they are used in said application.  We can derive this information from the Visio object model but not from a generic collection of controls at any level below the application itself.
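A toy sketch may make the distinction clearer.  Everything below is invented for illustration (no real accessibility API or Visio model is being quoted): a generic control tree exposes rectangles and lines with geometry and style, but the rule “a solid line means ‘reports to’” lives only in the application, so only something armed with that rule can answer the questions a user actually cares about.

```python
# Invented example: what a generic accessibility tree might expose for
# our org chart -- shapes and lines, but no meaning.
generic_tree = [
    {"role": "rectangle", "label": "Ted"},
    {"role": "rectangle", "label": "Chris"},
    {"role": "rectangle", "label": "Will"},
    {"role": "line", "style": "solid", "src": "Ted", "dst": "Chris"},
    {"role": "line", "style": "solid", "src": "Chris", "dst": "Will"},
]

def managers_of(tree, person):
    """Answerable only because WE supply the application-level rule
    'solid line = reporting relationship' -- the generic controls
    themselves carry no such knowledge."""
    return [c["src"] for c in tree
            if c.get("role") == "line"
            and c["style"] == "solid"
            and c["dst"] == person]

print(managers_of(generic_tree, "Will"))   # ['Chris']
print(managers_of(generic_tree, "Chris"))  # ['Ted']
```

Strip out the `managers_of` rule and all that remains is three rectangles and two lines, which is precisely the screen reader’s predicament.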

Peter suggested that some hybrid might also be a good idea where the AT product gets most of its information from the accessibility API and the truly application specific information from the actual application.  I still think that this requires that the application developer do a fair amount of work to expose this information in a usable manner.


G3 Interfaces and Deaf-Blind Users

Yesterday, Chris Westbrook, a fellow I know from various mailing lists and one who, I think, quite often has interesting things to say, asked about research into, and the potential efficacy of, a 3D audio interface for people with both vision and hearing impairments.  Until then, I hadn’t considered deaf-blind people in my analysis of how to improve the efficiency of screen reader users.  Deaf-blindness is not my area of expertise and, fortunately, is a relatively low incidence disability.  Our deaf-blind friends deserve the best access technology that the research and AT world can develop for their use, and I will take a stab at addressing some issues that deaf-blind people might encounter and how their screen reading experience can be improved.  As I said, though, I cannot speak with much authority on this subject, so please send me comments and pointers to articles so I can learn more.

Before I jump into a pontification on my views of technology for deaf-blind users in the future, I want to relate an amusing anecdote about an incident that occurred involving me at CSUN 2004.  

CSUN takes place every March at the convention center hotels at the Los Angeles Airport (LAX).  That year, Freedom Scientific demonstrated some of the first PAC Mate features designed for use by deaf-blind people.  I stayed at the Marriott that year and, as my daily routine dictated, I stood on line at the Starbucks in the lobby seeking my triple-shot venti latte.  While waiting and chatting with Jamal Mazrui, who was in line in front of me, I felt a tap on my shoulder and turned to face the person who wanted my attention.  As soon as I turned around, a pair of arms enveloped me in a very nice hug.  By the location of the anatomical parts of this affectionate person, I could tell immediately that she was very definitely a woman.  Then, to my surprise, a very deep male voice with a Scottish accent started talking.  I, somewhat startled, thought that I was either in the embrace of the recipient of the most successful sex change operation ever or that I had underestimated the depth of Scottish women’s voices.  Then, my human skills to understand context kicked in as I, still in this very pleasant embrace, heard the speaker tell me how “she” greatly appreciated the terrific effort we made in the PAC Mate to add features that deaf-blind people could use to communicate with each other.  The recognition that it was an interpreter talking changed my perspective greatly.

The day before, Brad Davis, VP of Hardware Product Management at FS, brought four deaf-blind people into the Freedom Scientific booth on the trade show floor.  He gave each of them a PAC Mate that had a wireless connection established with the network.  He showed these people how to launch MS Messenger, and they started a chat among themselves.  Ordinarily, blind people with profound hearing impairments need to communicate with each other by signing by touch into the palm of the other person’s hand.  Thus, it remains very difficult for more than two deaf-blind people to hold an efficient conversation together.  With PAC Mates in the Freedom Scientific booth that day, four deaf-blind people held the first ever conversation of its kind.  The happiness all of them displayed with this new tool gave everyone from FS gathered around one of the greatest senses of satisfaction with our work that I can remember experiencing.

Freedom Scientific has, since that day, gone on to add a number of additional PAC Mate related products designed for use by deaf-blind people to its catalogue.  I don’t know much about these products, so go to the FS web site to learn more about them.

Now, back to the topic at hand, how can a G3 interface be designed to improve the efficiency of deaf-blind users?

Again, I’m pretty much guessing here but I do have a few ideas.  I’ll start with the work my friends at ViewPlus are doing with tactile imaging and add a bit of Will Pearson’s work on haptics.  Today, these solutions carry fairly hefty price tags as is the case for most AT hardware.  They can, however, deliver a lot more information through their two and three dimensional expressions of semantic information than can be delivered through a single line Braille display.

A person can use a tactile image with both hands and, therefore, can, by knowing the distance between their hands, determine the information inherent in the different sizes of objects, the placement of objects in relation to each other and the “density” of the object from feeling various attributes that it can contain.

Thus, a person using a tactile pie chart can feel the different sizes of the items in the graphic and, far more easily than listening to JAWS read the labels and the values sequentially, learn the information in the chart through the use of more than one dimension.  This idea can also be applied to document maps, road maps and far more types of information than I can imagine at this moment.

Here, however, is where my ignorance stops me entirely from moving any further.  I can make some wild guesses as to how a force feedback haptic device might be applied to improve efficiency but I cannot do so with any authority at all.  I can’t even recall reading a single article on the topic of deaf-blind people and next generation interfaces.  Sorry for my lack of knowledge in this area and, as I stated above, please send me pointers to places where I can learn more.

Why is JAWS Used for My Examples?

JAWS is the screen reader I know the best.  It is the screen reader I use on a daily basis and, in my opinion, it is the best and most comprehensive screen reader available today.  No other screen access tool can deliver even close to the amount of contextual information that JAWS can.  Most other screen readers also entirely ignore the more advanced features of programs like Word, Excel, PowerPoint, Project and others that I need to do my job.  Without the broad range of access that JAWS delivers, many blind professionals would never have been able to get promotions to better positions, as they could not use the tools that their sighted counterparts do in the same workplace.

I can be very critical of all of the G2 screen readers because I study all of them and because I rely on them to do my job.  I hope that my criticism is seen as constructive as I think the screen reader companies (FS, Dolphin, GW Micro, Serotek and Code Factory) as well as those who work on gnome accessibility, GNU/Linux accessibility, Java accessibility and that peculiar little screen reader for Macintosh are all pushing access for we blinks in the right direction.  If I find fault with the products or businesses, I will say so as I feel that is my duty to my readers and to the blind community at large.  I do so with the intent that these fine organizations make improvements rather than to tear them down in any way.


Pros and Cons of Accessibility APIs: Notes on Peter Korn’s Blog Post of 2/6

Since starting this blog, I have discussed a number of different AT products that run on the Microsoft Windows platform almost to the exclusion of Macintosh, GNU/Linux and any other operating environment that a person with a vision impairment might want to use.  I’ve also made mention of MSAA and the upcoming UI Automation (both Microsoft technologies) to the exclusion of the gnome accessibility project and the Macintosh Accessibility API for Apple’s OSX platform.  This exclusion stems from my own knowledge, which is almost entirely about Windows, and because Windows is the overwhelmingly dominant operating environment used in places of employment around the world.  I also entirely ignore AT products outside of those for people with vision impairments as I have virtually no expertise (outside of conversations with some vendors of such products who are also personal friends) in that area.

Peter Korn, Sun Microsystems accessibility architect and long time accessibility advocate, in his February 6 blog entry, makes note of this and offers a different opinion as to the cause of the continued accessibility problems that I described in the posting I wrote about the failure of competition in the AT industry.  Peter, as often is the case, brings out some very good points, some with which I agree and others, like the failure of competition to deliver better solutions to AT users, with which I disagree.  Peter also mentions my near total exclusion of operating environments from vendors other than Microsoft which I do hope to correct in this post and in others to follow.

Peter correctly states that, in a later post about contextual information, I point out that MSAA, the accessibility API for Windows, didn’t offer enough to make mainstream applications truly accessible, and he goes on to say that a richer accessibility API, like those used by the gnome accessibility project for GNU/Linux platforms, the Macintosh accessibility API for OSX and Microsoft’s upcoming UI Automation API, will do a much better job.  I agree entirely with the assertion that better accessibility APIs built into the OS will make support for individual applications much easier and that AT companies will not need as many consulting dollars to make a specific program that complies with such an API accessible to their users.  I also agree that a truly rich API will increase the kind of competition among AT vendors that benefits users by a healthy amount.  Finally, I agree that such an API will make it possible for new and smaller players to succeed in the tight AT marketplace.

I disagree, though, that the breaking of ranks among the AT vendors (see my earlier post on competition or yesterday’s post on Peter’s blog) didn’t largely give mainstream software vendors a free pass to provide substandard accessibility solutions while claiming 508 compliance in their VPATs.  Specifically, the break in ranks among the vision related AT companies caused chaos among the mainstream vendors in their decision whether or not to adopt a standard API.  GW Micro argued that MSAA provided enough and pushed mainstream vendors to use it; Freedom Scientific quite publicly took a nearly anti-MSAA stance (due to the inadequacies of the API) and promoted the exposure of information through a document object model (DOM), like that in the MS Office suite of products, Corel Office products and Firefox.  FS promoted the DOM described by the W3C Web Accessibility Initiative (W3C/WAI) as the solution to Internet accessibility.  While DOM solutions require a lot of customization to support each application, they also provide a profoundly large amount of information about the data being exposed.  Recently, Serotek, in Freedom Box System Access, and GW Micro, in more recent Window-Eyes releases, have adopted support for the DOM concept in their versions of Internet and Microsoft Word access, and both have received acclaim from their users for doing so.

Taking a pure DOM approach to accessibility does have many faults.  As I state in the previous paragraph, each application may have a different model and to provide contextual information (like the relationship between two random cells in a spreadsheet or the relationship between multiple boxes in an organization chart) will require customization to deliver such information to the user.  Next, as Peter correctly asserts, most application specific object model solutions are also operating system specific.  Thus, an application developer that wants to make their multi-platform solution accessible must do custom work to support Windows, GNU/Linux and Macintosh separately.  
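The trade-off described above can be sketched with a toy example.  Nothing here quotes any real object model or API; the classes, names and formula syntax are all invented.  A screen-level view exposes only the displayed text of a spreadsheet, while a document object model can also answer contextual questions, such as how two cells relate to each other.

```python
# Invented toy "spreadsheet DOM" versus a flat, screen-level view.

class Cell:
    def __init__(self, name, value, formula=None):
        self.name, self.value, self.formula = name, value, formula

sheet = {
    "A1": Cell("A1", 2),
    "B1": Cell("B1", 3),
    "C1": Cell("C1", 5, formula="=A1+B1"),
}

# Flat view: just what appears on screen, with no relationships.
flat_view = [str(cell.value) for cell in sheet.values()]
print(flat_view)  # ['2', '3', '5'] -- and that is all a screen scrape knows

# DOM-level question: does one cell depend on another?
def depends_on(sheet, cell, other):
    """Recoverable only from the application's model, because the
    formula never appears in the rendered output."""
    return other in (sheet[cell].formula or "")

print(depends_on(sheet, "C1", "A1"))  # True
print(depends_on(sheet, "A1", "B1"))  # False
```

The price, as noted above, is that every application has its own model, so each one requires this kind of custom interpretation.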

In a perfect world, a cross-platform, rich accessibility API would be the ideal solution.  Unfortunately, we do not live in a perfect world.  Returning to the negatively valued competition between the Windows based AT companies, the chaos and the historic lack of cross-platform compatibility among accessibility APIs caused many mainstream application developers to choose to do nothing regarding accessibility and to place the onus of supporting applications that conform to no standard entirely on the shoulders of AT companies.  Thus, in many cases, only the wealthiest of the AT companies could provide even a passable solution – a reality that only benefited the large application developers who put “screen reader compatibility” on their VPATs.

Furthermore, many application developers, large and small, who only have products on the Windows platforms refused to add MSAA or DOM support because of the enormous cost of retrofitting an accessibility API to tens of millions of lines of source code.  Huge corporations simply said “no” to doing anything to their products to promote accessibility.  They refused to add MSAA, they refused to add support for the Java Accessibility API, they refused to add a DOM, they refused even to follow the W3C/WAI guidelines on their web sites and, yes, they refused to hire the relatively tiny AT companies to help them do any of this.

The argument that AT companies will work around compliance with standards, both to improve the experience for their users and, intentionally or not, to give the free pass to the big boys, is demonstrated by the back flips and hoop jumping that all credible screen readers do to work around web sites that conform poorly to the W3C/WAI guidelines and the Section 508 standards.  The WAI guidelines have been around long enough to have permeated the consciousness of large corporations.  Web accessibility validation and repair tools have existed for a fairly long time (in software years), so big companies can even automate a large portion of making their sites accessible.  Unfortunately, even with loud complaints from individual users, organizations like NFB and AT companies, compliance with accessibility standards seems unable to find a place in the minds of corporate web developers.  How, then, can we expect these same companies to retrofit millions of lines of source code to comply with an accessibility API?  What would motivate these big corporations to comply with accessibility standards if the vendors of the most popular AT products provide them with techniques to work around their accessibility problems?  Where is the profit motive for a large software vendor to modify its products to comply with an API if it can state that it is compatible with a screen reader and continue to get millions of dollars of contracts from the Federal government?

I cannot speak to either the gnome or Macintosh accessibility efforts until I do some more research.  I do have a GNU/Linux box in my home, but it does not run gnome; rather, it uses emacspeak and SpeakUp in text based consoles.  It’s only an old Gateway with 128 MB of RAM and a 450 MHz processor, so I think it might choke on a GUI.  I haven’t used a Macintosh in years, so I can’t speak to it at all.

I have, however, been following discussions of UI Automation from Microsoft.  This API is specific to Microsoft platforms but takes an interesting approach by calling itself a component of automated test procedures that just happens to provide benefits for accessibility products.  Linking the accessibility API to an automation layer for test tools is clever and may inspire more developers to use it than would if it only existed for the benefit of people with disabilities.

An open source model for accessibility tools and APIs also provides an interesting approach.  If an open source application provides poor accessibility, one, in theory, can go into the source code and add it.  If an open source API isn’t rich enough to expose detailed contextual information then, ostensibly, it can also be added by a lone hacker, a big company or anything in between as long as they are careful to extend without breaking the standard.  Of course, having individuals and companies make changes to an OS level API will require that an AT user install their peculiar flavor of the accessibility layer and, therefore, could easily break programs from other companies.  Incompatible versions of the Java Accessibility Bridge caused some real headaches which I haven’t kept apprised of and don’t know if they are fixed even today.

So, what is the solution that will bring universal accessibility to computer users with a vast array of disabilities who use a variety of different operating systems for a wide range of different purposes?

The most far-reaching, but likely impossible, approach is a “telepathy API” that lives in the OS and, through very complex AI heuristics, determines the context in which the user is working and the specific needs of the user, then delivers the information using the modality they prefer.  I am sure some of my AI friends would argue that they could probably derive all of this by analyzing the semantic information on a computer’s screen, but giants like Chomsky, Minsky and Rod Brooks can’t even start hacking away at this solution.

What, given the lack of a utopian solution, can we hope for and work towards?

In this area I agree with Peter entirely.  The best solution will be a highly verbose, extraordinarily flexible, cross-platform accessibility API: one adopted by a large enough segment of application developers, or smart enough to determine a fair amount of context on its own without any special intervention from application developers, and one that AT products of all kinds can easily be made to work with.

How, then, do we convince all of the OS developers, all of the application developers and all of the AT companies to join in working toward such a goal if they can claim 508 compliance with the miserable accessibility in most programs today?  I’m not sure if Peter or I might be smart enough to find this answer.  The history I’ve described above is pretty gloomy and I wish I could share Peter’s optimism that a new API would be the silver bullet we need.  I still contend that, to further the cause of accessibility, we need cooperation, outreach, communication and an end to nickel and dime, “me first” solutions provided by AT companies.


Outreach: The Key to Communication

On Friday and in some previous posts, I have taken a fairly critical view of the AT industry and the lack of innovation demonstrated there in recent history. While ATIA 2006 disappointed me, as I don’t find a pile of new CCTV devices too interesting, people who like such things probably felt overjoyed at the number of new entries into a niche that seemed to have stagnated for a while. Also, this ATIA brought us Code Factory’s Mobile Speak Pocket, with the first ever true touch screen interface designed for people with severe and total vision impairments. So, the AT industry shows progress; it just seems slow and driven more by new players than by the well-heeled establishment corporations.

I want, however, to remind Blind Confidential readers that my piece on Friday mentioned a number of different groups who should start sharing ideas to help accelerate innovation, though I singled out the AT industry more than the others. I sincerely believe that the AT companies, who ultimately live and die by the success of products they sell to we blinks, should take the leadership position in bringing innovations from the research phase to the product phase. As I also mentioned on Friday, these companies do not have the same level of resources available to other organizations, so, for them to succeed, communication from the research community is essential.

Thus, I start the week contemplating communication and its importance to innovation. All research facilities, corporate, academic or otherwise, staff themselves with great minds with big egos who love to publish their findings. The old expression about academia, "publish or perish," remains as true today as ever. Researchers who never announce their results fall into obscurity even within the research community and will often find themselves seeking employment outside of their field. The research community, however, publishes its results in journals and periodicals that AT people don't read. They also make their presentations at conferences which few AT people attend.

Meanwhile, the AT community meets at its own conferences, which the scholars rarely attend. One will always find a few professors and graduate students at ATIA, CSUN and the European shows, but they can rarely be found in the audience at a demo by Eric Damery or Ben Weiss. Innovations made in the AT industry often get missed by the research community, which frequently results in superfluous reinvention.

On numerous occasions, I have discussed a “new” idea with a researcher and found myself saying, “JAWS or ZoomText or Window-Eyes or PAC Mate already can do that.” Astonished, the graduate student starts talking about his innovation and, to stop this senseless waste of our time, I can often open my shoulder bag, pull out a device and demonstrate their concept in action.

Where does the fault for the communication gap fall?

On one hand, we have an AT industry that looks inward far more often than to the rest of the world for inspiration. I'll hear some AT product managers say, "Well, no one has done it that way before," as an explanation for why they do not explore new techniques. At the same time, I hear researchers say, "Well, no one showed me the latest version of JAWS, so I didn't know this had already shown up in the market."

Finally, the users themselves share some of the blame. Many years ago, one of the AT companies hired a fancy market research firm to survey users, trainers and decision makers in the blindness field. This research cost a lot of money and included a large sample of so-called experts in the field. The results seemed bizarre to the company that commissioned the study: fewer than 2% of all blind people cared about using spreadsheets, presentation tools like PowerPoint or professional database tools like Access, and far fewer than 1% cared about Macintosh or GNU/Linux boxes. Reading these results, blinks and sighties alike felt perplexed but, after some contemplation, we inferred that if a user's favorite screen reader, whether JAWS, Window-Eyes or OutSpoken, didn't do a good job in a particular class of applications, then the users didn't feel they needed those programs. It was the classic "chicken and egg" problem: if JAWS didn't support something, the users didn't know they wanted it.

How then could an AT company do market research? If the users stated they didn’t want what they already had, what would inspire innovation? How would the users even know if they cared about a new feature if they hadn’t already started using it?

The answer came when some AT companies chose to become a vanguard for innovation and for access to an increasingly large number of types of applications. I think Eric Damery deserves much of the credit for this movement as, in his role as JAWS product manager, he brought to market many of the ideas that the blind FS engineers thought up. Eric's sense of what will and will not be useful to a large population of users is uncanny; unlike the engineers, who would make every programming, debugging or hacking tool as cool as possible, he brings a sense of what users in workplaces, universities and recreational settings will actually enjoy. As Eric included support for an increasing variety of programs, users started trying them out and then started demanding that this support constantly improve. Ben Weiss and the guys at AI^2 did the same for the low vision business and drove magnifiers ever forward.

Competition, of the real sort as opposed to the failure I described in an earlier post, drove the rest of the screen readers and magnifiers to catch up. Window-Eyes always tried to reach parity with JAWS while MAGic forever tried to catch ZoomText. In this way, competition was both fierce and healthy.

Now, however, we find ourselves in a stagnant stage for innovation. JAWS and Window-Eyes add a few new features with each release but none are as dramatic as the virtual buffer introduced in JAWS in 1999 or the first Terminal Services/Citrix Server solution that Window-Eyes brought out a few years back.

Why has innovation slowed?

The AT companies have reached a point where adding support for a particular application offers access to a very small percentage of their user base. It's hard to find more things to support that the majority of users will find useful, and the economics prohibit them from working too hard on obscure products.

What other innovation is necessary?

Screen readers, since the advent of Windows, have delivered information through what I will call a second generation, or G2, interface (G1 being the text-based DOS and GNU/Linux systems). They took information from the screen or from an API and, in a serial manner, one syllable or pause at a time, pushed the text out through the speakers. For many years, this interface represented the best one could expect. It is now, however, time for the birth of G3, the third generation of screen reader interfaces.
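To make the G2 idea concrete, here is a caricature in a few lines of Python. This is purely a hypothetical sketch of the serial model, not any shipping screen reader's code: every event, whatever its source, funnels into one first-in, first-out speech stream, so nothing can be presented spatially or in parallel.

```python
# Hypothetical sketch of a G2-style serial speech pipeline.
# Every announcement waits its turn in a single FIFO stream.
class SerialSpeech:
    def __init__(self, tts):
        self.tts = tts      # backend callable that voices one chunk of text
        self.backlog = []   # pending chunks, spoken in arrival order

    def announce(self, text):
        # Events from the screen or an API all land in the same queue.
        self.backlog.append(text)

    def flush(self):
        # Push the text out one chunk at a time, strictly serially.
        while self.backlog:
            self.tts(self.backlog.pop(0))

# Demonstration: capture "spoken" output in a list instead of a sound card.
spoken = []
sr = SerialSpeech(spoken.append)
sr.announce("Dialog opened")
sr.announce("OK button")
sr.flush()
# spoken == ["Dialog opened", "OK button"]
```

The limitation is visible in the structure itself: with only one output channel, the richness of a 3D audio scene simply has nowhere to go, which is what a G3 interface would change.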

Ideas for a G3 paradigm resound through the research world. One only needs to take a look at some of the work in audio transformations going on at U. Toronto or McGill up in Canada, Brewster’s work from the UK and a number of other publications from sources as varied as NASA and robotics labs in corporations and universities. We can also play one of the advanced audio games like Shades of Doom and find a three dimensional interface metaphor as rich as any I’ve seen deployed to date.

So, we return to the questions above. What can we do to get the researchers to talk to the gamers to talk to the AT people who can deliver it all to the users?

I suppose I can rant and rave and the few people who read this blog, read my articles published elsewhere or come to hear me do a presentation will hear my position. I can suggest that the AT companies start having their product managers read the scholarly journals and that academics start spending more time at trade shows but I doubt they will listen to me.

Someone, some company, some university or some group needs to start an outreach program. Perhaps SIG Access at the ACM or something at IEEE. Maybe ATIA can start a new ideas SIG. Maybe AFB can start a meeting of the minds resource that the stakeholders can access easily. Maybe we need a university to draw a line in the sand and start collecting all of the stuff that exists into a comprehensive research library.

I’d like to hear what the readers think.

Subscribe to the Blind Confidential RSS Feed at: http://feeds.feedburner.com/Blindconfidential