Discrimination Against Blind Men Due to a Lack of Standards

A number of years ago, I wrote an article about a pervasive problem in the design of public facilities that directly discriminates against men with vision impairments.  That paper contained footnotes, statements from human subjects, statistical analysis of the distribution of the offending design problems and a number of case studies describing real-world problems.  I sent the work off to many different publications involved in blindness issues, universal design, architecture and construction.  None of these publications saw fit to run my item and few even provided me with the courtesy of a response.  As I said, this problem affects men with vision impairments; there may be an analogue among blind women, but I have not studied it.  So, to my female readers: if you have encountered problems similar to those I describe for men below, please write to me so we can expose that discrimination as well.

This blog entry will contain a shorter and less scholarly description of the problem and how it affects blind men.  I will not include citations, footnotes or other trappings of academic publication, as this will not be peer reviewed and doesn’t need them.

Description of the problem:

There exists a rarely discussed area of discrimination against blind men resulting from the lack of standardization of urinals and other bathroom fixtures.  The outcome is that men with profound vision impairments often end up with their own bodily fluids and that of others on their bodies and clothing.  This is disgusting, unsanitary and unhealthy.  It is also degrading in the worst possible way.

The manifestation of the problem:

When a blind man enters a public restroom for the first time (a place of public accommodation and, therefore, subject to the ADA requirement for reasonable accommodations), he knows neither the layout of the room nor the type of fixtures he may encounter.  If he is in the public restroom alone, he cannot ask anyone where the urinals, sinks or toilets are located.  So, the independent man with profound vision impairment must start swinging his stick around in hopes of hitting porcelain.

When he finally locates the type of fixture he needs, he must then figure out which design of said fixture he has found and adjust himself to use it appropriately.  Herein lies the problem: many, if not most, public restrooms are cleaned far less often than necessary to maintain truly sanitary conditions.  Even the most meticulously cleaned restroom may have encountered a biological disaster shortly before the blind man enters, so one must assume that even these might contain hazardous fluids.

Thus, how should a man with profound vision impairment approach the situation?  When the blind man walks toward the urinal he has located with his cane, one of his legs may bump into a part of the urinal that extends far beyond the portion his cane has touched.  If a previous user missed the target a bit, this may result in a stain on the blind man’s clothing, and no one will pay the dry cleaning bill for something that was clearly the fault of others.

Surely, he should not start feeling his way around to determine the shape of the urinal and the location of the target, as this would require his hands to touch potentially dangerous fluids.  I know of no blind men who travel with disposable rubber gloves and I don’t think it is right to expect them to do so.  The blind man, in lieu of using his hands, must poke around with his cane to determine the shape of the urinal and then make a best guess as to the target.

With the presumed location of the target in his mind, the blind man must then find a place to lean his cane, open his pants, take aim and fire away.  Here resides the second problem: the different shapes, heights and sizes of urinals create different splash-back patterns, and those that locate their target near the floor can easily be missed, resulting in peeing on one’s own shoes.  Variants on splash back also arise from different water levels; urinals holding a lot of water splash less than those that are mostly damp but not filled with fluid.  Thus, if the man with profound vision impairment misses the target (also known as the sweet spot in the lexicon of plumbing), the splash back can be fierce and result in the gentleman ruining a pair of pants.

Once the man with profound vision impairment has finished using the urinal, he must now find the flusher.  This problem also curses toilets.  Groping around in an unsanitary place to find a handle, knob or, in some cases, push button results in acquiring any and all kinds of fluids on one’s hands.  The blind man must then pick up his cane to go off to find the sinks, thus transferring some of the hazardous fluids onto the handle of his cane.

Once he finds the sinks, he must grope around to find the soap and then wash his hands.  He should probably also wash the handle of his cane as it has also been exposed to this bio-hazard.

Finally, the blind man must find the paper towels.  This often results in more groping in an unsanitary place, which effectively defeats the purpose of having washed his hands in the first place.

So, the lack of uniformity in the design of public restrooms and the fixtures installed therein results in one of the most deplorable hidden discriminations against men with profound vision impairments.  A longer exploration of this matter will include non-standard toilet paper dispensers and those racquetball-court-sized toilet stalls designed for our friends in wheelchairs.

Subscribe to the Blind Confidential RSS Feed at: http://feeds.feedburner.com/Blindconfidential

Interesting Electronic Mobility Systems

Like Saturdays, I don’t typically post here on Sundays either.  I like to take the weekend off, let new ideas gel and start the week anew on Mondays.  Yesterday, though, a friend sent me two articles about how different mobility systems get used by people with vision impairments.  The first was a comparison of BrailleNote GPS and Trekker from Humanware and StreetTalk from Freedom Scientific.  This review appeared in the February 2006 issue of the Braille Monitor from NFB and covers only the solutions that were demonstrated at the NFB conference last July.  In the seven months since, another talking GPS system has emerged, based upon Wayfinder, which, when used with Talx or Mobile Speak on a Symbian Series 60 cell phone or with Mobile Speak Pocket on a PDA/phone like the HP 6515 (which comes with the GPS receiver built in), provides an excellent solution for considerably less expense than those from the AT vendors.  The Wayfinder solution, which I use on my HP 6515, is my favorite and I think it is certainly worth checking out.  Wayfinder provides a free five-day demo on their web site and the installer works nicely with JAWS, so, if you already have a phone running Talx or MS or you have a PDA/phone with MSP, I suggest you give it a try.

The second article, a press release really, describes a system that sounds somewhat like Talking Signs but, just to keep things confusing, is also called “Wayfinder.”  Nonetheless, it sounds like our friends in the UK are doing some pretty nifty things with technology in open spaces.

The following article comes to us from Birmingham, England:

Birmingham City Council, UK
Saturday, February 11, 2006

Wayfinder system in Birmingham City Centre for Blind & Partially-Sighted People

By Press Release

Summary: A new facility to help blind and visually impaired people navigate
their way around the heart of Birmingham city centre will be launched in
Spring 2006.  The 60 Wayfinder units will be installed around the city
centre, providing users with practical audible information to confirm their
location and assist them to reach their destination safely.

Most units are being installed on existing street furniture to minimise
street clutter and, where no street furniture exists, being fixed into new
purpose built stainless steel posts located at the back of footways. Users
will carry a trigger card to activate the speaker unit when within range.
These triggers will be made available in Birmingham’s principal languages.
Details on how and where to obtain the triggers will be available shortly.

The total cost of the Wayfinder scheme is £165,000, £65,000 of which was
recently agreed by Councillor Len Gregory, Cabinet Member for Transportation
& Street Services. Cllr Gregory said: “This is an excellent system,
assisting blind and partially sighted people find their way around
Birmingham city centre. It will help people more easily find transport in
the city, their places of work, shopping venues, public services and visitor
attractions, making Birmingham an even more accessible city”.

The city council has worked in partnership with many other agencies on this
project, including The Royal National Institute of the Blind (RNIB),
Birmingham Focus on Blindness, Guide Dogs for the Blind, Queen Alexandra
College, National Federation for the Blind, BBC Birmingham and others. Many
of these organisations have been represented by people with a visual
impairment.

Rob Legge, Chief Executive, Birmingham Focus on Blindness, said: “Sight loss
is a frightening and traumatic experience that affects almost every aspect
of a person’s life! Our aim is to help the 30,000 children and adults in
Birmingham who have sight loss to achieve a better quality of life.
Wayfinder goes a long way to achieving this. For people with sight
impairment, travelling around the city independently is a major problem, so
Birmingham Focus is delighted to be working with Birmingham City Council and
others on the Wayfinder project.”

Following the launch, the City Council will be encouraging users to give
their views on Wayfinder to enable the system to be fully adapted to their
needs.

Reference Number 8375
Press contact Kathy Williams 0121 303 3764
Issue Date 10 February 2006

http://www.birmingham.gov.uk/GenerateContent?CONTENT_ITEM_ID=76039&CONTENT_ITEM_TYPE=9&MENU_ID=276




Gnome Accessibility and Complex Data Relationships and a Little Extra

I rarely post to Blind Confidential on weekends but, today, I wanted to present a couple of short statements about this blog and its recent contents.  One will add a little more to the API conversation and the other will talk about the perspective of some of my posts.

On the matter of complex relationships and accessibility layers:

This morning, I read the post made by Anonymous about the relationships possible in the new gnome accessibility API.  Before that post, no one had brought this to my attention and I accept the blame for not having done thorough enough research on the matter before saying that it couldn’t or wouldn’t happen.  The relationships page in the gnome API documentation clearly demonstrates that it can happen and, in fact, people are doing it today.

I still do not know how to motivate application developers to add this to new programs or to retrofit it to the billions of lines of code already out there but, as far as I could read, this API should do the trick quite well.

I may have an answer to my economic argument as well.  This may already be the case, as I haven’t spent the time to read up on how the gnome API attaches the relationship facility to the rest of the system, but, if the idea doesn’t already exist, I propose that the relationship system be attached to the help system.  In many applications that use complex data relationships, users without disabilities often find themselves lost in the maze of information.  They find that making one change to the data causes a ton of side effects that they didn’t predict.  Microsoft Excel has a pretty nice little window that displays the dependency tree in a spreadsheet but, according to some sighted people I’ve asked to look at it, the diagram gets far too complicated to understand in truly massive and highly complex spreadsheets.  I have witnessed sighted and blind users alike struggle with predecessor and dependency relationships in MS Project, which could also be simplified by a system like this.

My notion of attaching the relationship facility to the help system would provide an answer for mainstream and AT users alike.  In real time, someone using a project management tool could query, “What will happen if I change this value or break this link?”  Having the relationship tree available in a form that can be delivered sensibly to humans will solve a huge number of problems for AT and be enormously useful to anyone who has made a handful of “what if” changes to a spreadsheet and then cannot figure out why the whole thing has gone kind of nutty.
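
That kind of real-time query can be served by something as simple as a relationship graph walked transitively.  The sketch below is purely illustrative and assumes nothing about the actual gnome API; the `affected_by` function, the cell names and the dependency structure are all invented for the example:

```python
# Illustrative sketch: a spreadsheet-style dependency graph that can
# answer "what will happen if I change this value?" by walking the
# recorded relationships transitively.

def affected_by(deps, changed):
    """Return every item that directly or indirectly depends on `changed`."""
    affected = set()
    frontier = [changed]
    while frontier:
        current = frontier.pop()
        # Find everything whose inputs include the current item.
        for item, inputs in deps.items():
            if current in inputs and item not in affected:
                affected.add(item)
                frontier.append(item)
    return affected

# Each entry maps a cell to the cells it reads from (invented example).
deps = {
    "subtotal": {"price", "quantity"},
    "tax":      {"subtotal"},
    "total":    {"subtotal", "tax"},
}

print(sorted(affected_by(deps, "price")))   # → ['subtotal', 'tax', 'total']
```

The same traversal, pointed at a project plan instead of a spreadsheet, would answer the predecessor and dependency questions that trip up MS Project users.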

I feel like a kid at Christmas, as this seems to be exactly what I spent so many hours insisting upon in AT/IT compatibility meetings, Accessibility Forum meetings and every other venue where I could speak, banging on tables and demanding a mechanism to expose complex contextual relationships in applications.  My hat goes off to Peter and the other fine hackers behind the gnome accessibility project.  Please tell me, privately or here, how I can get a demo of this in action.

Why has Blind Confidential been so gloomy lately?

I’ve looked back at the posts I’ve put here in the past couple of weeks.  I find that I write far more critical pieces about the past, present and future of technology and people with vision impairments.  I do believe that most of the criticism I’ve presented can be validated and should be remedied.  I also like hearing people point out where I said something false so it can be corrected.  In reality, though, I have a very positive outlook on the future of technologies for us blinks and will post about the more optimistic issues in the future.  I have been very busy with other projects lately and I guess that, in the short time I have to spend writing this blog, the negatives come faster and are easier to discuss than the cool, exciting new stuff in research and technology.

My current life has me working as a freelance writer, an itinerant research scientist and a self-described purveyor of discount wisdom.  This is a very cool way to spend one’s life.  I get exposed to the coolest new research, get to go to the cool egghead conferences where people push ideas rather than sell products and get to hang out with a lot of really smart people whose agenda is to further science.  If I had been given the opportunity to describe my dream job, it would not have come out as well as this.

So, sorry for so much criticism lately.  I’ll try to post more researchy stuff and good news like the item about the gnome accessibility layer above.


AT Products and OS Security

Recently, the discussion on Blind Confidential has grown increasingly esoteric.  I doubt that too many people care as intensely about accessibility API layers and where they live as those of us engaged in the discussion do.  Today, I’ll move on to another topic that will explain to many why the accessibility layer discussion has such great importance, why, ultimately, it must reside at the operating system level and why you should care.

One of the dirty little secrets of the AT industry regards how virtually all screen readers, magnifiers, on-screen keyboards and other very important programs used by people with disabilities, at some level, compromise system security.  Some Windows-based AT products (none from Freedom Scientific, I am happy to say) go so far as to turn off some security settings in the Windows Registry during their installation process and require that they remain off in order to function properly.  All AT products that work in the login screen, and anywhere else that passwords get entered, compromise system security.

At this point in time, though, no other techniques exist to make these aspects of a computing environment accessible to people who need the type of AT products that hook the keyboard, mouse and/or video systems.  The Macintosh screen reader, on-screen keyboard and magnifier are the exceptions in this case, as they do not hook video at all and they get information from the keyboard and mouse through the Apple accessibility layer.  Unfortunately, the Apple screen reader isn’t, according to Access World, very good, and a professional could not use it in a workplace.  So, this leaves AT users in a quandary: compromise security or don’t use programs, computers, web sites, files or anything else that requires a password.  Neither option provides a good outcome.

What about AT products compromise security?

I will use screen readers as my example but the same problems exist in other types of AT products too.  If a screen reader can say, “star star star…” while the user types in a password and also respond to its built-in keystrokes at the same time, it can know your password, as it looked at every keystroke to determine whether you were issuing a command or typing text.  Fortunately, all of the AT vendors I have ever met, which probably represents a large sample of the business, are overwhelmingly scrupulous people who would never use this information in an illicit manner.  Unfortunately, if the operating systems permit AT products to hook sensitive aspects of the information stream, they also provide the opportunity for the nastiest of the network criminals to do the same thing.  So, to keep computer systems entirely secure, the OS developers need to close off some of the ways AT products have traditionally received information.
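
To make the exposure concrete, here is a deliberately simplified sketch of why any program that both echoes “star” and watches for its own command keystrokes necessarily sees the password itself.  This is not any real screen reader’s code and not a real OS hook API; the `hook` function and its arguments are invented for illustration:

```python
# Simplified illustration: a keystroke "hook" in the style an AT product
# might use.  Any function that decides between "speak star" and "run a
# screen reader command" must receive the raw key, so whatever sits at
# that hook point, legitimate or malicious, sees the password.

spoken = []            # what the user hears
captured = []          # what any code at the hook point could log

def hook(key, in_password_field):
    captured.append(key)           # the hook sees every raw keystroke
    if in_password_field:
        spoken.append("star")      # the user hears only "star"
    else:
        spoken.append(key)

for ch in "hunter2":               # invented example password
    hook(ch, in_password_field=True)

print("".join(captured))   # the hook knows the password: hunter2
print(spoken[0])           # the user only ever heard: star
```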

Do AT products make my computer any less secure?

The AT programs that change security-related registry settings do, in fact, make your system less secure.  I know for certain that JAWS, MAGic, Connect Outloud and Serotek’s Freedom Box System Access do not make such registry changes.  I have not paid close enough attention in the past year to whether those that had done this in the past have since fixed the problem, so I will not name names, as they may have remedied the situation already.

Windows-based AT programs that do not change these registry settings do not make your system any less secure than any other piece of software.  The techniques used by AT developers to gain access to this information show up on various hacker-oriented web sites around the world, with very good documentation as to how one can do these things.

So, the bad guys already know how to do this stuff and we all spend money on virus checkers, spyware eliminators, firewalls and other system security programs to make sure that the work of the nefarious types stays off of our computers.

Is Windows the Only OS Subject to These Problems?

I don’t think so.  The Macintosh probably creates the greatest difficulty for the bad guys, but I don’t know enough about the GNU/Linux platform to make a truly informed statement about it.  I will say that the text-based SpeakUp screen access utility that I use on my GNU/Linux box could present a huge security threat in that, to install it, you have to modify your operating system kernel, which means that your screen reader has access to the lowest-level and most dangerous information on your system.  Fortunately, if you make sure you get your SpeakUp distribution from a reputable source (like the project’s own web site), you can be sure that it is safe, as the people who maintain that distribution are also users of the software.  Also, open source screen readers expose their source code to the entire world and can, therefore, be inspected by other hackers to make sure that nothing illegal has been added.

Why will a new accessibility API be better?

If the accessibility layer lives at the operating system level, it can enforce the same security constraints on all programs.  By removing direct access to the input and output streams, the operating system itself becomes more secure and, therefore, less prone to invasions by software with criminal intent.  Unfortunately, this means that AT users need to rely upon information delivered by the accessibility API and, if this information is less rich than that which they can get today, it will result in computers becoming less usable by people with vision impairments and probably other disabilities as well.

What does the future hold?

As I know people like Peter from Sun, Rob and others up at Microsoft and Mary Beth and Travis from Apple personally, I can say that they are all working very hard on the next generation of operating system accessibility interfaces.  They are all very smart and highly dedicated people and those who work with them in the various assistive technology groups at OS companies put in everything they can muster to make the next generation as good as they can.  They also accept a fair amount of input from AT and application developers alike and integrate much of this feedback into their designs.

I’m not sure that I will ever find an accessibility layer in an OS to expose everything I want but some of this may be impossible (reference my comment on a telepathy API earlier this week).  The best outcome will happen when the OS, AT and application developers can all meet their needs with a single solution and I hope that through outreach and communication this will happen someday.

What do I mean when I use the word hacker?

I am an old timer in the programming world.  I started programming as a hobby when I was eleven years old, when I first got access to a PDP-8 at Lawrence Berkeley Labs, and I have been hooked on it ever since.  I turned professional in 1979 and have worked in the field ever since.  Back in the old days, before the uninformed media got hold of our vocabulary, the word “hacker” meant “very talented and curious programmer type.”  There are good hackers, people like Richard Stallman, who hack for the benefit of the entire world.  There are criminal hackers who use their skills to break into systems and steal money and/or information.  There are tourist hackers who will, illegally, work their way into a secure system just for the challenge of doing so.  The tourists are mostly harmless but are trespassing and, therefore, breaking laws.

Then, there is the group I dislike more than any of the others; these are what I call the vandals.  It is the vandals who launch worms and viruses just for the sake of messing everyone else up.  It is these vandals who write stupid Outlook scripts to send emails to everyone in your address book.  The vandals aren’t even hackers, if you use the definition above; they are just troublemakers.  The overwhelming majority of vandals have little talent, and the nasty programs they set upon the world could usually be written by any high school kid with a copy of Visual BASIC and a little free time.  They are the technical equivalent of kids who throw rocks through windows or spray paint cars.  They are not hackers.


Interesting Articles and a Little More on Accessibility APIs

I didn’t have much time to think of a topic for today’s post so, instead, I’ll provide some pointers to a few articles I’ve read recently that I found of particular interest.  At the bottom, I will add a few comments in response to Will’s post on our accessible API discussion as well.

These articles are in no particular order:

The first is about elders and their use of technology products, the Internet and Pocket PC devices.  It is in the UI Design Newsletter and is called Selling older users short.  It debunks various myths about older people and their use of technology products and, in my opinion, shows a promising market for the Code Factory Mobile Speak Pocket product in a very large and mostly untapped market.

Next is an article about a very interesting motor sports event in India titled “Blind navigators show the way,” which could be a good application for StreetTalk from Freedom Scientific, Wayfinder with either MSP or Talx, or one of the other talking GPS solutions.  The page containing this article isn’t amazingly accessible but you can find the important spots with a little poking around.

In my sighted days, I truly loved the visual arts.  Those of you who know me well know that I remain active in less visual fine art media like literature, poetry, music and, most recently, I’ve added tactile and audio arts to my interests.  Touch tours are becoming increasingly popular at museums so here are a couple of articles about them, “No longer impossible: blind embrace art and museums welcome blind” and “Museums make art accessible to blind.”

Remaining in the art world, here’s a pretty interesting article on a difficult web page about a blind artist, “Blind artist overcomes challenges.”  I am fairly certain that this is the first article I’ve ever read in the Pocono Record, a publication I never thought I would read.  Isn’t the Internet swell?

Here’s an item from Japan (in English) about a new plastic sheet technology for displaying Braille.  I don’t know any more about this than what is on the web page, including how recent this innovation may be, but I thought it was apropos to the discussion about haptics going on here lately.

Special thanks to Professor William Mann, Eric Hicks and, most especially, Lisa Yayla, the owner and unofficial research librarian of the Adaptive Graphics mailing list hosted on freelists.org for sending me these pointers.

Back to APIs

Yesterday, Will Pearson posted two very well considered comments.  As I had guessed, he had some very valuable things to add to my deaf-blind posting and his ideas on accessibility APIs are also well founded.

I agree that, for generic information, building accessibility into the user interface library would solve many, even most, accessibility problems.  Microsoft did not build MSAA into MFC (the popular C++ library); they chose, instead, to put it at a lower level, in the common control layer.  This decision produced some very good outcomes, but only in applications that used standard controls.  Putting MSAA a level up, in MFC, would have solved the problem for some custom controls used in MFC applications but would have done absolutely nothing for Win32 applications or for programs whose UI was written with a different set of foundation classes, even if they employed standard controls.  So, Microsoft solved some of the problems by providing support for all applications that use standard controls, written using MFC or not, but relied upon the application developers to add MSAA to controls that diverged from the standard.
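
The consequence can be sketched abstractly.  In the toy model below, which is an invented illustration and not the actual MSAA interfaces, anything built from a standard control gets accessibility for free, while a custom control exposes nothing until its author does the extra work:

```python
# Invented sketch of the standard-vs-custom control split described
# above.  Class and method names are made up for illustration; they do
# not correspond to real MSAA or Win32 interfaces.

class StandardControl:
    """Stands in for a common control the OS made accessible for free."""
    def __init__(self, role, name):
        self.role, self.name = role, name

    def accessible_info(self):
        return {"role": self.role, "name": self.name}

class CustomControl:
    """Stands in for a hand-drawn control whose author added nothing."""
    def accessible_info(self):
        return None          # invisible to a screen reader

# A window mixing both kinds: the AT product sees only half of it.
window = [StandardControl("button", "OK"), CustomControl()]
visible_to_at = [c.accessible_info() for c in window]
print(visible_to_at)
```

The gap between the two classes is exactly the work Microsoft left to application developers for non-standard controls.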

Unfortunately, most Windows applications, whether written using MFC, WTL or some other library, use anywhere from a few to many inaccessible custom controls.  Also, a major problem for accessibility APIs, as we look to the future, comes from applications that use proprietary, cross-platform UI libraries.

Tom Tom, the popular GPS program, is one example of how a proprietary, cross-platform UI library renders an application completely inaccessible.  If someone installs Tom Tom on an iPAQ running MSP or on a PAC Mate, they will find that the screen reader can only “see” some window titles and an occasional control.  To maintain a uniform visual look and feel across all of the platforms they support (TT runs on Windows Mobile, Palm OS, Symbian and iPod, to name a few), Tom Tom has created its own, completely inaccessible UI library.  Tom Tom doesn’t even load standard fonts from the OS but, rather, builds a font library into its software.  This permits them to keep their trademark appearance consistent on all platforms but completely destroys the possibility of any screen reader gaining access to their information.  (Off topic: if you need a portable talking GPS solution, buy Wayfinder or StreetTalk, as they work very well.  Wayfinder, from the mainstream, is much cheaper than Tom Tom, and StreetTalk is less expensive than the others designed specifically for blind users.)  So, even if an accessibility API existed on the platforms where Tom Tom runs, and it was at the class library or user interface level, it wouldn’t work.

The combination of cross-platform development and the desire for a unique look and feel causes two of my lasting fears for the next generation of accessibility APIs, especially when we factor in the labor costs of retrofitting a new, even if cross-platform, user interface library to the billions of lines of code already deployed around the world.

Moving from the pragmatic and returning to the delivery of contextually interesting semantic information, I have yet to see how a generic control can have enough knowledge of its purpose to deliver truly useful information about what it is doing at any given point in time.  Button controls, table controls, list box controls and tree view controls, to name a few, don’t understand what they contain nor why they contain it.

I’ll return to our Visio organization chart example.  Let’s imagine a very simple chart with five names in it: Will, Chris, Peter, Eric and Ted.  Because Ted is a hall of famer, we’ll put him at the top, and because Eric and Chris are managers, we’ll have them report to Ted.  So, our Ted box has two arrows coming from it: one to the Chris box and the other to the Eric box.  Because Will is a hacker, he will report to Chris directly, so we’ll add an arrow from Chris to Will.  As Peter is an ideas guy and a hacker, he will report directly to Eric but indirectly to Chris and Ted, so we’ll add a solid arrow from Eric to Peter and dotted arrows from Ted and Chris to Peter as well.  Now, just to make matters interesting, we’ve decided that the ideas guys get to set priorities, so Peter and Eric will have dotted lines pointing to Chris, as he must have the engineers build what they design.

Our organization has six boxes: one for each person plus the bounding box that contains the members.  Even if we assume that our accessibility API is extensive enough to include a rectangle control that understands that it might also be a container and a line control that knows its attributes (dotted, solid, etc.), we still do not have enough information to describe the relationships between the boxes unless the application itself provides a lot of supplementary information about the meaning of boxes and lines as they are used in said application.  We can derive this information from the Visio object model but not from a generic collection of controls at any level below the application itself.
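
To see why, consider recording only what the generic controls themselves could report: endpoints and a dotted-or-solid attribute for each line.  The meaning of an arrow, that solid indicates a direct report and dotted an indirect one, exists only at the application level.  Here is a small sketch of the example above (the data layout and the `direct_manager_of` function are invented for illustration):

```python
# The Visio example as data.  Generic line controls could, at best,
# report (source, target, style); the *meaning* of those styles lives
# only in the application.

lines = [
    ("Ted",   "Chris", "solid"),
    ("Ted",   "Eric",  "solid"),
    ("Chris", "Will",  "solid"),
    ("Eric",  "Peter", "solid"),
    ("Ted",   "Peter", "dotted"),
    ("Chris", "Peter", "dotted"),
    ("Peter", "Chris", "dotted"),
    ("Eric",  "Chris", "dotted"),
]

# Application-level knowledge: in *this* diagram a solid arrow means a
# direct report.  No generic control layer can supply this rule.
def direct_manager_of(person):
    return [src for src, dst, style in lines
            if dst == person and style == "solid"]

print(direct_manager_of("Peter"))   # → ['Eric']
```

The arrows alone are ambiguous; only the application knows that `"solid"` means “reports directly to,” which is exactly the supplementary information the paragraph above says must come from the application itself.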

Peter suggested that some hybrid might also be a good idea where the AT product gets most of its information from the accessibility API and the truly application specific information from the actual application.  I still think that this requires that the application developer do a fair amount of work to expose this information in a usable manner.


G3 Interfaces and Deaf-Blind Users

Yesterday, Chris Westbrook, a fellow I know from various mailing lists and one who, I think, quite often has interesting things to say, asked about research into, and the potential efficacy of, a 3D audio interface for people with both vision and hearing impairments.  Until then, I hadn’t considered deaf-blind people in my analysis of how to improve the efficiency of screen reader users.  Deaf-blindness is not my area of expertise and, fortunately, is a relatively low-incidence disability.  Our deaf-blind friends deserve the best access technology that the research and AT world can develop for their use, and I will try to take a stab at addressing some issues that deaf-blind people might encounter and how their screen reading experience can be improved.  As I said, though, I cannot speak with much authority on this subject, so please send me comments and pointers to articles so I can learn more.

Before I jump into a pontification on my views of technology for deaf-blind users in the future, I want to relate an amusing anecdote about an incident that occurred involving me at CSUN 2004.  

CSUN takes place every March at the convention center hotels at the Los Angeles Airport (LAX).  That year, Freedom Scientific demonstrated some of the first PAC Mate features designed for use by deaf-blind people.  I stayed at the Marriott that year and, as my daily routine dictated, I stood on line at the Starbucks in the lobby seeking my triple-shot venti latte.  While waiting and chatting with Jamal Nazrui, who was in line in front of me, I felt a tap on my shoulder and turned to face the person who wanted my attention.  As soon as I turned around, a pair of arms enveloped me in a very nice hug.  By the location of the anatomical parts of this affectionate person, I could tell immediately that she was very definitely a woman.  Then, to my surprise, a very deep male voice with a Scottish accent started talking.  I, somewhat startled, thought that I was either in the embrace of the recipient of the most successful sex change operation ever or that I had underestimated the depth of Scottish women’s voices.  Then my human skills for understanding context kicked in as I, still in this very pleasant embrace, heard the speaker tell me how “she” greatly appreciated the terrific effort we made in the PAC Mate to add features that deaf-blind people could use to communicate with each other.  The recognition that it was an interpreter talking changed my perspective greatly.

The day before, Brad Davis, VP of Hardware Product Management at FS, brought four deaf-blind people into the Freedom Scientific booth on the trade show floor.  He gave each of them a PAC Mate that had a wireless connection established with the network.  He showed these people how to launch MS Messenger and they started a chat among themselves.  Ordinarily, blind people with profound hearing impairments communicate with each other by signing by touch into the palm of the other person’s hand.  Thus, it remains very difficult for more than two deaf-blind people to hold an efficient conversation together.  With a PAC Mate in the Freedom Scientific booth that day, four deaf-blind people held the first ever conversation of its kind.  The happiness all of them displayed with this new tool gave everyone from FS who was present one of the greatest senses of satisfaction with our work that I can remember experiencing.

Freedom Scientific has, since that day, gone on to add a number of additional PAC Mate related products designed for use by deaf-blind people to its catalogue.  I don’t know much about these products, so go to the FS web site to learn more about them.

Now, back to the topic at hand, how can a G3 interface be designed to improve the efficiency of deaf-blind users?

Again, I’m pretty much guessing here but I do have a few ideas.  I’ll start with the work my friends at ViewPlus are doing with tactile imaging and add a bit of Will Pearson’s work on haptics.  Today, these solutions carry fairly hefty price tags as is the case for most AT hardware.  They can, however, deliver a lot more information through their two and three dimensional expressions of semantic information than can be delivered through a single line Braille display.

A person can explore a tactile image with both hands and, by sensing the distance between them, determine the information inherent in the different sizes of objects, the placement of objects in relation to each other and the “density” of an object from the various attributes it can contain.

Thus, a person using a tactile pie chart can feel the different sizes of the items in the graphic and, far more easily than listening to JAWS read the labels and the values sequentially, learn the information in the chart through the use of more than one dimension.  This idea can also be applied to document maps, road maps and far more types of information than I can imagine at this moment.
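As a rough sketch of the data side of this idea (the function name and layout are my own, not any ViewPlus API), the preparation step amounts to converting the chart's values into angular sweeps so that each slice's physical size under the fingers is proportional to its share of the total:

```python
def pie_segments(values):
    """Convert raw chart values into (start_angle, sweep) pairs in
    degrees, so each embossed slice's size is proportional to its
    share of the total."""
    total = sum(values)
    segments, start = [], 0.0
    for v in values:
        sweep = 360.0 * v / total
        segments.append((start, sweep))
        start += sweep
    return segments

# A reader exploring the embossed chart feels the relative sweeps
# directly instead of listening to labels and values one at a time.
```

The same proportional mapping applies to bar charts, document maps and the other two-dimensional renderings mentioned above.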

Here, however, is where my ignorance stops me entirely from moving any further.  I can make some wild guesses as to how a force feedback haptic device might be applied to improve efficiency but I cannot do so with any authority at all.  I can’t even recall reading a single article on the topic of deaf-blind people and next generation interfaces.  Sorry for my lack of knowledge in this area and, as I stated above, please send me pointers to places where I can learn more.

Why is JAWS Used for My Examples?

JAWS is the screen reader I know the best.  It is the screen reader I use on a daily basis and, in my opinion, it is the best and most comprehensive screen reader available today.  No other screen access tool can deliver anywhere close to the amount of contextual information that JAWS can.  Most other screen readers also entirely ignore the more advanced features of programs like Word, Excel, PowerPoint, Project and others which I need to do my job.  Without the broad range of access that JAWS delivers, many blind professionals would never have been able to earn promotions to better positions, as they could not use the tools that their sighted counterparts do in the same workplace.

I can be very critical of all of the G2 screen readers because I study all of them and because I rely on them to do my job.  I hope that my criticism is seen as constructive as I think the screen reader companies (FS, Dolphin, GW Micro, Serotek and Code Factory) as well as those who work on gnome accessibility, GNU/Linux accessibility, Java accessibility and that peculiar little screen reader for Macintosh are all pushing access for we blinks in the right direction.  If I find fault with the products or businesses, I will say so as I feel that is my duty to my readers and to the blind community at large.  I do so with the intent that these fine organizations make improvements rather than to tear them down in any way.


Pros and Cons of Accessibility APIs: Notes on Peter Korn’s Blog Post of 2/6

Since starting this blog, I have discussed a number of different AT products that run on the Microsoft Windows platform almost to the exclusion of Macintosh, GNU/Linux and any other operating environment that a person with a vision impairment might want to use.  I’ve also made mention of MSAA and the upcoming UI Automation (both Microsoft technologies) to the exclusion of the gnome accessibility project and the Macintosh Accessibility API for Apple’s OSX platform.  This exclusion stems from my own knowledge, which is almost entirely about Windows, and because Windows is the overwhelmingly dominant operating environment used in places of employment around the world.  I also entirely ignore AT products outside of those for people with vision impairments as I have virtually no expertise (outside of conversations with some vendors of such products who are also personal friends) in that area.

Peter Korn, Sun Microsystems accessibility architect and long time accessibility advocate, in his February 6 blog entry, makes note of this and offers a different opinion as to the cause of the continued accessibility problems that I described in the posting I wrote about the failure of competition in the AT industry.  Peter, as often is the case, brings out some very good points, some with which I agree and others, like the failure of competition to deliver better solutions to AT users, with which I disagree.  Peter also mentions my near total exclusion of operating environments from vendors other than Microsoft which I do hope to correct in this post and in others to follow.

Peter correctly states that, in a later post about contextual information, I point out that MSAA, the accessibility API for Windows, doesn’t offer enough to make mainstream applications truly accessible, and he goes on to say that a richer accessibility API, like the gnome accessibility API for GNU/Linux platforms, the Macintosh accessibility API for OSX or Microsoft’s upcoming UI Automation API, will do a much better job.  I agree entirely with the assertion that better accessibility APIs built into the OS will make support for individual applications much easier and that AT companies will not need as many consulting dollars to make a program compliant with such an API accessible to their users.  I also agree that a truly rich API will increase the kind of competition among AT vendors that benefits users by a healthy amount, and that such an API will make it possible for new and smaller players to succeed in the tight AT marketplace.

I disagree, though, with the suggestion that the breaking of ranks among the AT vendors (see my earlier post on competition or yesterday’s post on Peter’s blog) did not largely give mainstream software vendors a free pass to provide substandard accessibility solutions while claiming 508 compliance in their VPATs.  Specifically, the break in ranks among the vision related AT companies caused chaos among the mainstream vendors in their decision whether to adopt a standard API.  GW Micro argued that MSAA provided enough and pushed mainstream vendors to use it; Freedom Scientific quite publicly took a nearly anti-MSAA stance (due to the inadequacies of the API) and promoted the exposure of information through a document object model (DOM), like those in the MS Office suite of products, Corel Office products and Firefox.  FS promoted the DOM described by the W3C Web Accessibility Initiative (W3C/WAI) as the solution to Internet accessibility.  While DOM solutions require a lot of customization to support each application, they also provide a profoundly large amount of information about the data being exposed.  Recently, Serotek, in Freedom Box System Access, and GW Micro, in more recent Window-Eyes releases, have adopted support for the DOM concept in their versions of Internet and Microsoft Word access, and both have received acclaim from their users for doing so.

Taking a pure DOM approach to accessibility does have many faults.  As I stated in the previous paragraph, each application may have a different model, and providing contextual information (like the relationship between two random cells in a spreadsheet or the relationship between multiple boxes in an organization chart) requires customization to deliver such information to the user.  Next, as Peter correctly asserts, most application specific object model solutions are also operating system specific.  Thus, an application developer who wants to make a multi-platform solution accessible must do custom work to support Windows, GNU/Linux and Macintosh separately.

In a perfect world, a cross platform, rich accessibility API would be the ideal solution.  Unfortunately, we do not live in a perfect world.  Returning to the counterproductive competition between the Windows based AT companies: the chaos and historic lack of cross platform compatibility among accessibility APIs caused many mainstream application developers to do nothing at all about accessibility and to place the onus of supporting applications that conform to no standard entirely on the shoulders of AT companies.  Thus, in many cases, only the wealthiest of the AT companies could provide even a passable solution, a reality that benefited only the large application developers who could put “screen reader compatibility” on their VPATs.

Furthermore, many application developers, large and small, who only have products on the Windows platform refused to add MSAA or DOM support because of the enormous cost of retrofitting an accessibility API onto tens of millions of lines of source code.  Huge corporations simply said “no” to doing anything to their products to promote accessibility.  They refused to add MSAA, they refused to add support for the Java Accessibility API, they refused to add a DOM, they refused even to follow the W3C/WAI guidelines on their web sites and, yes, they refused to hire the relatively tiny AT companies to help them do any of this.

The back flips and hoop jumping that all credible screen readers perform to work around web sites that conform poorly to the W3C/WAI guidelines and the Section 508 standards further demonstrate that AT companies will work around noncompliance, both to improve the experience for their users and, intentionally or not, to give the big boys a free pass.  The WAI guidelines have been around long enough to have permeated the consciousness of large corporations.  Web accessibility validation and repair tools have existed for a fairly long time (in software years), so big companies can even automate a large portion of making their sites accessible.  Unfortunately, even with loud complaints from individual users, organizations like NFB and AT companies, compliance with accessibility standards seems unable to find a place in the minds of corporate web developers.  How then can we expect these same companies to retrofit millions of lines of source code to comply with an accessibility API?  What would motivate these big corporations to comply with accessibility standards if the vendors of the most popular AT products provide them with techniques to work around their accessibility problems?  Where is the profit motive for a large software vendor to modify its products to comply with an API if it can state that it is compatible with a screen reader and continue to get millions of dollars of contracts from the Federal government?

I cannot speak to either the gnome or Macintosh accessibility efforts until I do some more research.  I do have a GNU/Linux box in my home, but it does not run gnome; rather, it uses emacspeak and SpeakUp in text based consoles.  It’s only an old Gateway with 128 MB of RAM and a 450 MHz processor, so I think it might choke on a GUI.  I haven’t used a Macintosh in years, so I can’t speak to it at all.

I have, however, been following discussions of UI Automation from Microsoft.  This API is specific to Microsoft platforms but takes an interesting approach by calling itself a component of automated test procedures that just happens to provide benefits for accessibility products.  Linking the accessibility API to an automation layer for test tools is clever and may inspire more developers to use it than would if it only existed for the benefit of people with disabilities.

An open source model for accessibility tools and APIs also provides an interesting approach.  If an open source application provides poor accessibility, one can, in theory, go into the source code and add it.  If an open source API isn’t rich enough to expose detailed contextual information then, ostensibly, that too can be added by a lone hacker, a big company or anyone in between, as long as they are careful to extend without breaking the standard.  Of course, having individuals and companies make changes to an OS level API would require that an AT user install their peculiar flavor of the accessibility layer and, therefore, could easily break programs from other companies.  Incompatible versions of the Java Accessibility Bridge caused some real headaches; I haven’t kept apprised of the situation and don’t know whether they are fixed even today.

So, what is the solution that will bring universal accessibility to computer users with a vast array of disabilities who use a variety of different operating systems for a wide range of different purposes?

The most far reaching, but likely impossible, approach would be a “telepathy API” that lives in the OS and, through very complex AI heuristics, determines the context in which the user is working and the specific needs of the user, then delivers the information in the modality the user prefers.  I am sure some of my AI friends would argue that they could probably derive all of this by analyzing the semantic information on a computer’s screen, but giants like Chomsky, Minsky and Rod Brooks couldn’t even start hacking away at this solution.

What, given the lack of a utopian solution, can we hope for and work toward?

In this area I agree with Peter entirely.  A highly verbose, extraordinarily flexible, cross platform accessibility API, one that could easily be made to work with AT products of all kinds, would be the best solution, provided it were adopted by a large enough segment of application developers or were smart enough to determine a fair amount of context on its own without any special intervention from them.

How, then, do we convince all of the OS developers, all of the application developers and all of the AT companies to join in working toward such a goal when they can claim 508 compliance despite the miserable accessibility of most programs today?  I’m not sure that either Peter or I is smart enough to find this answer.  The history I’ve described above is pretty gloomy, and I wish I could share Peter’s optimism that a new API would be the silver bullet we need.  I still contend that, to further the cause of accessibility, we need cooperation, outreach, communication and an end to the nickel-and-dime, “me first” solutions provided by AT companies.


Outreach: The Key to Communication

On Friday and in some previous posts, I have taken a fairly critical view of the AT industry and the lack of innovation demonstrated there in recent history. While ATIA 2006 disappointed me, as I don’t find a pile of new CCTV devices too interesting, people who like such things probably felt overjoyed at the number of new entries into a niche that seemed to have stagnated for a while. Also, this ATIA brought us Code Factory’s Mobile Speak Pocket, with the first ever true touch screen interface designed for people with severe and total vision impairments. So, the AT industry shows progress; it just seems slow and driven more by new players than by the well-heeled establishment corporations.

I want, however, to remind Blind Confidential readers that my piece on Friday mentioned a number of different groups who should start sharing ideas to help accelerate innovation, but I singled out the AT industry more so than the others. I sincerely believe that the AT companies, who ultimately live and die by the success of the products they sell to we blinks, should take the leadership position in bringing innovations from the research phase to the product phase. As I also mentioned on Friday, these companies do not have the same level of resources available to other organizations, so, for them to succeed, communication from the research community is essential.

Thus, I start the week contemplating communication and its importance to innovation. All research facilities, corporate, academic or otherwise, staff themselves with great minds with big egos who love to publish their findings. The old expression about academia, “publish or perish,” remains as true today as ever. Researchers who never announce their results fall into obscurity even within the research community and will often find themselves seeking employment outside their field. The research community, however, publishes its results in journals and periodicals that AT people don’t read. They also make their presentations at conferences which few AT people attend.

Meanwhile, the AT community meets at its conferences, which the scholars rarely attend. One will always find a few professors and graduate students at ATIA, CSUN and the European shows, but they can rarely be found in the audience at a demo by Eric Damery or Ben Weiss. Innovations made in the AT industry often get missed by the research community, which frequently results in superfluous reinvention.

On numerous occasions, I have discussed a “new” idea with a researcher and found myself saying, “JAWS or ZoomText or Window-Eyes or PAC Mate already can do that.” Astonished, the graduate student starts talking about his innovation and, to stop this senseless waste of our time, I can often open my shoulder bag, pull out a device and demonstrate their concept in action.

Where does the fault for the communication gap fall?

On one hand, we have an AT industry that looks inward far more often than to the rest of the world for inspiration. I’ll hear some AT product managers say, “Well, no one has done it that way before,” as an explanation for why they do not explore new techniques. At the same time, I hear researchers say, “Well, no one showed me the latest version of JAWS, so I didn’t know this had already shown up in the market.”

Finally, the users themselves share some of the blame. Many years ago, one of the AT companies hired a fancy market research firm to survey users, trainers and decision makers in the blindness field. This research cost a lot of money and included a large distribution of so-called experts in the field. The results seemed bizarre when read by the company that commissioned the study. They told us that fewer than 2% of all blind people cared about using spreadsheets, presentation tools like PowerPoint, or professional database tools like Access, and that far fewer than 1% cared about Macintosh or GNU/Linux boxes. Reading these results, blinks and sighties alike felt perplexed but, after some contemplation, we inferred that if a user’s favorite screen reader, whether JAWS, Window-Eyes or OutSpoken, didn’t do a good job in a particular class of applications, then the users didn’t feel they needed those programs. It was the classic “chicken and egg” problem: if JAWS didn’t support something, the users didn’t know they wanted it.

How then could an AT company do market research? If the users stated they didn’t want what they already had, what would inspire innovation? How would the users even know if they cared about a new feature if they hadn’t already started using it?

The answer came when some AT companies chose to become a vanguard for innovation and access to an increasingly large number of types of applications. I think Eric Damery deserves much of the credit for this movement as, in his role as JAWS product manager, he brought many of the ideas that the blind FS engineers thought up to the market. Eric’s sense of what will and will not be useful to a large population of users is uncanny and, unlike the engineers who would make every programming, debugging or hacking tool as cool as possible, he brings a sense of what users in workplaces, universities and recreational settings will actually enjoy. As Eric included support for an increasing variety of programs, users started trying them out and then started demanding that this support constantly improve. Ben Weiss and the guys at AI^2 did the same for the low vision business and drove magnifiers ever forward.

Competition, of the real sort as opposed to the failure I described in an earlier post, drove the rest of the screen readers and magnifiers to catch up. Window-Eyes always tried to reach parity with JAWS while MAGic forever tried to catch ZoomText. In this way, competition was both fierce and healthy.

Now, however, we find ourselves in a stagnant stage for innovation. JAWS and Window-Eyes add a few new features with each release but none are as dramatic as the virtual buffer introduced in JAWS in 1999 or the first Terminal Services/Citrix Server solution that Window-Eyes brought out a few years back.

Why has innovation slowed?

The AT companies have reached a point where adding support for a particular application offers access to only a very small percentage of their user base. It’s hard to find more things to support that the majority of users will find useful, and the economics prohibit them from working too hard on obscure products.

What other innovation is necessary?

Screen readers, since the advent of Windows, have delivered information through what I will call a second generation, or G2, interface (G1 being the text based DOS and GNU/Linux systems). They take information from the screen or from an API and, in a serial manner, one syllable or pause at a time, push the text out through the speakers. For many years, this interface represented the best one could expect. It is now, however, time for the birth of G3, the third generation of screen reader interfaces.
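The G2 pipeline can be caricatured in a few lines (the function and data here are my own invention, not any shipping screen reader): whatever structure the screen holds, the output is reduced to one serial stream.

```python
def g2_serialize(grid):
    """A G2 screen reader reduces a two-dimensional structure to a
    single serial stream of text, discarding the spatial
    relationships a sighted user absorbs at a glance."""
    return ", ".join(cell for row in grid for cell in row)

table = [["Name", "Sales"],
         ["East", "40%"],
         ["West", "60%"]]
# The user hears one long utterance; nothing in the stream ties
# "East" to "40%" the way their shared row does on screen.
```

A G3 interface, whatever form it takes, must stop throwing that second dimension away.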

Ideas for a G3 paradigm resound through the research world. One only needs to take a look at some of the work in audio transformations going on at U. Toronto or McGill up in Canada, Brewster’s work from the UK and a number of other publications from sources as varied as NASA and robotics labs in corporations and universities. We can also play one of the advanced audio games like Shades of Doom and find a three dimensional interface metaphor as rich as any I’ve seen deployed to date.

So, we return to the questions above. What can we do to get the researchers to talk to the gamers to talk to the AT people who can deliver it all to the users?

I suppose I can rant and rave and the few people who read this blog, read my articles published elsewhere or come to hear me do a presentation will hear my position. I can suggest that the AT companies start having their product managers read the scholarly journals and that academics start spending more time at trade shows but I doubt they will listen to me.

Someone, some company, some university or some group needs to start an outreach program. Perhaps SIG Access at the ACM or something at IEEE. Maybe ATIA can start a new ideas SIG. Maybe AFB can start a meeting of the minds resource that the stakeholders can access easily. Maybe we need a university to draw a line in the sand and start collecting all of the stuff that exists into a comprehensive research library.

I’d like to hear what the readers think.


Communication: The Key to Innovation

Being in attendance at a scholarly conference like ICADI reminds me that those of us who design and deploy technologies for people with vision impairments can improve the efficiency with which we communicate our ideas.  I’ve identified a number of groups who, for all intents and purposes, operate in isolation from each other and, therefore, do not benefit from the discoveries made by those in the other groups.  All inventors stand on the shoulders of giants and build on the cumulative body of knowledge.  This, unfortunately, doesn’t happen too often with efforts for people with vision impairments.

The distinct groups I’ve identified include: the AT companies, businesses like FS, GW Micro and others who make the commercial AT products; the academics, who do a lot of great research that rarely gets incorporated into AT products; the audio game developers, who make what might be the most interesting advances in UI for people with vision impairments but are mostly ignored by those outside their community; and, finally, the lone hackers, who come up with cool ideas and spread them around their circle of friends but end up seeing those ideas fade into obscurity without the funding of a university, corporation or small business.  One other group that stands out as oft ignored is the huge companies that manufacture technology hardware; for the most part, these companies have some kind of accessibility group, but the rate of technology transfer is so slow that many concepts never find their way into AT products.

I have an article coming out in next month’s Access World which addresses one of these issues, but I think we blinks need to start pushing for summit meetings that include representatives of all of these groups plus other stakeholders.  Blind people make up a small minority of the population at large, so the ratio between research dollars and return on investment may scare a lot of people away from heading down this path.  Maximizing the efficiency with which this research can happen will therefore make each dollar spent go much further.

Compared to mainstream technology companies, Freedom Scientific and Humanware, the two biggest in the blindness biz, seem puny.  Microsoft makes in less than a week what the entire collection of companies making products for people with vision impairments makes in a year.  Microsoft can, therefore, more readily afford to support research efforts and, in fact, it does.  So do IBM, Sun Microsystems, Hewlett-Packard and other civic minded corporations.  Unfortunately, the AT companies rarely have the time to read the results of the research and tend toward reluctance when it comes to making radical innovations.  The AT industry, even with its millions of dollars, has little ability to afford long term projects and the kind of forward thinking that may not pay off until much later.  They can, however, spend more time listening to and learning from those who do innovate, and incorporate the items perceived to be marketable into their products.

The audio game hackers live in a world all their own.  The AT people ignore them with the thought that no one can make money building $30 products for blind people.  The academics ignore them more out of lack of knowledge of their existence than anything else (it is hard to have a large presence when you make $30 products for blinks) and the hardware companies tend to think of the workplace first and don’t want to spend time looking at a niche within a niche.

How then can we get all of these groups talking?  I’m hoping this blog gains some more popularity so such things can be discussed here.  I also think that industry organizations like ATIA should look past mere compatibility and start working toward an economic model for innovation that all of these groups will buy into.  Innovation is the key to success in the future and cannot be ignored; let’s all work together to find a way in which competition and innovation can co-exist comfortably.


Screen Readers and Contextual Information

The topic of contextual information in user interface and the lack thereof in screen reader interfaces creates a lot of controversy and discussion among people who think about such things.  A few years ago, ATIA held an AT/IT compatibility meeting which kicked off its AT/IT compatibility committee as well as its AT/AT compatibility committee.  The membership of the AT/IT group contained people from mainstream companies (Microsoft, IBM, Adobe, etc.) and AT companies (FS, AI^2, Medentech and others).  Its agenda included helping define new methods of communicating between AT products and mainstream software packages in order to improve the user experience.

Well, a few years have passed, lots of new versions of JAWS, Window-Eyes and HAL have shipped.  A new version of MS Office (2003) went out the door and these releases demonstrate little progress in bringing the screen reader user more contextual information.  Apple has released its screen reader and Microsoft’s next OS release will contain a new API called “UI Automation” which claims to hold enough information to move the art forward.  Through a lot of terrific work by our friends at Sun Microsystems and IBM, a new AT/IT compatibility protocol was published by the AT/IT committee that looks a lot like the gnome accessibility API with some extensions and, to my knowledge, hasn’t been used in any product anywhere.

Other than direct “screen scraping” (taking information written to the screen and delivering it verbatim to the user), the two most commonly used techniques for getting data out of an application and delivering it to a blind user have been MSAA and Microsoft’s Object Model interface.  As a Microsoft manager once said, “an MSAA object is like an amoeba, it knows a bit about itself and its purpose but nothing about its surroundings,” which makes the context problem impossible to solve using MSAA as the sole source of information.  The MS Object Model provides much more information about context but remains limited enough that a screen reader cannot provide its users with information rich enough for them to perform at an even level with their sighted counterparts.
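The amoeba remark can be made concrete with a toy model (this is an illustration in Python, not the actual IAccessible interface or any real DOM; all names are mine). The amoeba-style object knows its own name, role and value; the DOM-style node additionally knows where it sits, which is exactly what a screen reader needs to answer contextual questions:

```python
class AmoebaObject:
    """Caricature of an MSAA-style object: it knows a bit about
    itself and its purpose but nothing about its surroundings."""
    def __init__(self, name, role, value):
        self.name, self.role, self.value = name, role, value

class DomNode(AmoebaObject):
    """A DOM-style node also knows its place in the document tree,
    so a screen reader can report surrounding context."""
    def __init__(self, name, role, value, parent=None):
        super().__init__(name, role, value)
        self.parent, self.children = parent, []
        if parent:
            parent.children.append(self)

doc = DomNode("Budget", "document", None)
row = DomNode("Row 2", "row", None, parent=doc)
cell = DomNode("B2", "cell", "42", parent=row)

# With the amoeba alone we could announce "cell B2, 42" and nothing
# more; with the tree we can also say which row and document hold it.
```

The cost, of course, is that every application must build and expose that tree, which is the customization burden discussed above.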

What do I mean by contextual information?

When a sighted person views a spreadsheet, they can focus their attention on a specific cell and, by seeing the surrounding cells, infer certain pieces of information about the cell they care most about.  A JAWS user can learn a lot about a cell and its context by using some of the more advanced JAWS features, but the limits on those features slow data gathering far more than a quick shift of gaze slows a sighted user.  I use JAWS as the example because it leads the pack in delivering contextual information in important applications; I will criticize it a bit in this article only by suggesting that more can be done in the future.  Window-Eyes and Freedom Box System Access have both done some catching up in this area in word processors and should be encouraged to continue to innovate as well.
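The kind of context a sighted user infers by glancing at surrounding cells can be sketched as a tiny “title reading” routine.  This is a hypothetical illustration of the idea, not JAWS’s actual implementation, and it assumes the simplest possible layout: row 0 holds column headers and column 0 holds row labels.

```python
# Hypothetical sketch of "title reading": announce a cell together with
# its column header and row label, mimicking the context a sighted user
# gets by glancing at the surrounding cells.
def speak_cell(grid, row, col):
    header = grid[0][col]    # assumption: row 0 holds column headers
    label = grid[row][0]     # assumption: column 0 holds row labels
    return f"{header} for {label}: {grid[row][col]}"

grid = [["Month",    "Revenue", "Expenses"],
        ["January",  "4000",    "2500"],
        ["February", "5200",    "2600"]]

print(speak_cell(grid, 2, 1))  # Revenue for February: 5200
```

A bare screen reader would announce only “5200”; the single extra sentence of context is the difference between a number and a fact.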

Screen readers present information in a one-dimensional manner.  They either deliver a long string of syllables through a speech synthesizer or a few characters on a single line of a refreshable Braille display.  Multi-line Braille displays are cost prohibitive and have made few inroads into the market.  The ViewPlus tactile display, while expensive, does an amazing job of delivering contextual information and, for users who can gain access to such products, it may be the feel of the future.

How can screen readers improve?

Three-dimensional audio is now cheap and ubiquitous.  On Tuesday, I paid $99 for a Creative Labs audio card for my laptop that supports all of the latest DirectX features, Dolby 5.1 and 7.1, as well as a few other goodies.  So, as I have mentioned before, the screen reader vendors should start researching two- and three-dimensional interfaces in audio and, using tactile displays like the one from ViewPlus, two-dimensional touch interfaces.

Such interfaces will make using products we blinks already have, like word processors and spreadsheets, more efficient, and will open up accessibility to very graphical programs like Microsoft Visio and Visual Studio.

Why is this important?

Because we blinks need to compete in the workplace.  We desire career advancement and hope to move up in organizations.  Today, it is impossible to provide “reasonable accommodations” for programs from which screen readers cannot gather enough information to make them usable.  The technology is available in DirectX, in the tactile displays from ViewPlus and in Microsoft’s Object Models.  Now, we need the AT companies to be creative and discover ways to deliver it to us.  JAWS has set a pretty high bar with the information it can expose in Word, Excel, PowerPoint and MS Project but still has a way to go.  The other players need to catch up to JAWS and then start considering adding more dimensions to the reality of screen reader users like me.  This is one of my real hot button topics, so expect to read more about contextual information and multi-dimensional interfaces at Blind Confidential in the future.

Now, back to the ICADI conference to learn more about smart homes and how we blinks can use them to improve our standard of living.  Thus far, this conference has been very cool.

Subscribe to the Blind Confidential RSS Feed at: http://feeds.feedburner.com/Blindconfidential