I didn’t have much time to think of a topic for today’s post so, instead, I’ll provide some pointers to a few articles I’ve read recently that I found of particular interest. At the bottom, I will add a few comments in response to Will’s post on our accessible API discussion as well.
These articles are in no particular order:
The first is about elders and their use of technology products, the Internet and Pocket PC devices. It appears in the UI Design Newsletter and is called "Selling older users short." It debunks various myths about older people and their use of technology and, in my opinion, points to a promising, very large and mostly untapped market for Code Factory's Mobile Speak Pocket product.
Next is an article about a very interesting motor sports event in India titled "Blind navigators show the way," which could be a good application for StreetTalk from Freedom Scientific, Wayfinder with either MSP or Talks, or one of the other talking GPS solutions. The page containing this article isn't especially accessible, but you can find the important spots with a little poking around.
In my sighted days, I truly loved the visual arts. Those of you who know me well know that I remain active in less visual fine art media like literature, poetry, music and, most recently, I’ve added tactile and audio arts to my interests. Touch tours are becoming increasingly popular at museums so here are a couple of articles about them, “No longer impossible: blind embrace art and museums welcome blind” and “Museums make art accessible to blind.”
Remaining in the art world, here's a pretty interesting article, on a difficult web page, about a blind artist: "Blind artist overcomes challenges." I am fairly certain that this is the first article I've ever read in the Pocono Record, a publication I had never thought I would ever read. Isn't the Internet swell?
Here's an item from Japan (in English) about a new plastic sheet technology for displaying Braille. I don't know any more about this than what is on the web page, including how recent the innovation may be, but I thought it was apropos to the discussion about haptics going on here lately.
Special thanks to Professor William Mann, Eric Hicks and, most especially, Lisa Yayla, the owner and unofficial research librarian of the Adaptive Graphics mailing list hosted on freelists.org for sending me these pointers.
Back to APIs
Yesterday, Will Pearson posted two very well-considered comments. As I had guessed, he had some very valuable things to add to my deaf-blind posting, and his ideas on accessibility APIs are also well founded.
I agree that, for generic information, building accessibility into the user interface library would solve many, even most, accessibility problems. Microsoft did not build MSAA into MFC (the popular C++ library); instead, they chose to put it at a lower level, in the common control layer. This decision delivered some very good outcomes, but only in applications that used standard controls. Putting MSAA a level up, in MFC, would have solved the problem for some custom controls used in MFC applications but would have done absolutely nothing for Win32 applications or for programs that employed standard controls but used a different set of foundation classes for their UI. So, Microsoft solved some of the problems by providing support for all applications that used standard controls, written with MFC or not, but relied upon application developers to add MSAA to controls that diverged from the standard.
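To make the trade-off concrete, here is a minimal sketch, in Python rather than real Win32/COM code, of why support placed in the shared control layer reaches every application that uses standard controls while leaving custom-drawn controls invisible. The class and function names are invented for illustration and do not correspond to the actual MSAA interfaces.

```python
# Illustrative model only -- not the Win32 API or MSAA's IAccessible interface.

class StandardButton:
    """Models a common control: accessibility comes 'for free' because the
    shared control layer answers for every app that uses it, whether the app
    was written with MFC, WTL or raw Win32."""
    def __init__(self, label):
        self.label = label

    def accessible_name(self):
        return self.label


class CustomDrawnButton:
    """Models an owner-drawn control: it only paints pixels, so unless the
    application developer adds accessibility support themselves, there is
    nothing for assistive technology to query."""
    def __init__(self, label):
        self._pixels = f"<rendered bitmap of '{label}'>"
    # No accessible_name() -- the screen reader sees nothing but pixels.


def screen_reader_describe(control):
    """What a screen reader can report about a control."""
    if hasattr(control, "accessible_name"):
        return control.accessible_name()
    return "unknown control"


print(screen_reader_describe(StandardButton("OK")))        # -> OK
print(screen_reader_describe(CustomDrawnButton("Cancel"))) # -> unknown control
```

The same structure explains the trade-off in the paragraph above: putting the hook one level up (in MFC) would have covered MFC's custom controls but excluded every non-MFC program, while the common control layer covers all toolkits at the cost of leaving custom controls to the application developer.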
Unfortunately, most Windows applications, whether written with MFC, WTL or some other library, use anywhere from a few to many inaccessible custom controls. Also, a major problem for accessibility APIs as we look to the future is the growing number of applications that use proprietary, cross-platform UI libraries.
Tom Tom, the popular GPS program, is one example of how a proprietary, cross-platform UI library can render an application completely inaccessible. If someone installs Tom Tom on an iPAQ running MSP or on a PAC Mate, they will find that the screen reader can only "see" some window titles and an occasional control. To maintain a uniform visual look and feel across all of the platforms it supports (TT runs on Windows Mobile, Palm OS, Symbian and iPod, to name a few), Tom Tom has created its own, completely inaccessible UI library. Tom Tom doesn't even load standard fonts from the OS but, rather, builds a font library into its software. This lets the company keep its trademark appearance consistent on all platforms but completely destroys the possibility of any screen reader gaining access to its information. (Off topic: if you need a portable talking GPS solution, buy Wayfinder or StreetTalk, as they work very well. Wayfinder, from the mainstream, is much cheaper than Tom Tom, and StreetTalk is less expensive than the others designed specifically for blind users.) So, even if an accessibility API existed on the platforms where Tom Tom runs, and it sat at the class library or user interface level, it wouldn't work.
The combination of cross-platform development and the desire for a unique look and feel accounts for two of my lasting fears about the next generation of accessibility APIs – especially when we factor in the labor cost of retrofitting a new, even if cross-platform, user interface library onto the billions of lines of code already deployed around the world.
Moving from the pragmatic and returning to the delivery of contextually interesting semantic information, I have yet to see how a generic control can have enough knowledge of its purpose to deliver truly useful information about what it is doing at any given point in time. A button control, a table control, a list box control or a tree view control, to name a few, understands neither what it contains nor why it contains it.
I'll return to our Visio organization chart example. Let's imagine a very simple chart with five names in it: Will, Chris, Peter, Eric and Ted. Because Ted is a hall of famer, we'll put him at the top, and because Eric and Chris are managers, we'll have them report to Ted. So, our Ted box has two arrows coming from it: one to the Chris box and the other to the Eric box. Because Will is a hacker, he will report to Chris directly, so we'll add an arrow from Chris to Will. As Peter is an ideas guy and a hacker, he will report directly to Eric but indirectly to Chris and Ted, so we'll add a solid arrow from Eric to Peter and dotted arrows from Ted and Chris to Peter as well. Now, just to make matters interesting, we've decided that the ideas guys get to set priorities, so Peter and Eric will have dotted lines pointing to Chris, as he must have the engineers build what they design.
Our organization chart has six boxes: one for each person, plus the bounding box that contains them all. If we assume that our accessibility API is extensive enough to include a rectangle control that understands that it might also be a container, and a line control that knows its attributes (dotted, solid, etc.), we still do not have enough information to describe the relationships between the boxes unless the application itself provides a great deal of supplementary information about what boxes and lines mean as they are used in that application. We can derive this information from the Visio object model, but not from a generic collection of controls at any level below the application itself.
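The gap can be shown with a small sketch. The data below is a hypothetical Python model of the example above, not the real Visio object model: the "generic view" holds everything a shape-level accessibility API could plausibly report, while the dictionaries hold the relationships only the application knows.

```python
# What a generic accessibility API could report: shapes and line styles only.
generic_view = [
    ("rectangle", "Ted"), ("rectangle", "Chris"), ("rectangle", "Eric"),
    ("rectangle", "Will"), ("rectangle", "Peter"),
    ("line", "solid"), ("line", "solid"),   # Ted -> Chris, Ted -> Eric
    ("line", "solid"), ("line", "solid"),   # Chris -> Will, Eric -> Peter
    ("line", "dotted"), ("line", "dotted"), # Ted -> Peter, Chris -> Peter
    ("line", "dotted"), ("line", "dotted"), # Peter -> Chris, Eric -> Chris
]

# What only the application's object model knows: what those lines *mean*.
reports_to = {                      # solid arrows: direct reports
    "Chris": "Ted", "Eric": "Ted", "Will": "Chris", "Peter": "Eric",
}
dotted_relationships = [            # dotted arrows: indirect / priority lines
    ("Ted", "Peter"), ("Chris", "Peter"),
    ("Peter", "Chris"), ("Eric", "Chris"),
]

def describe(person):
    """The semantic answer a screen reader could give -- but only with
    application-level knowledge, never from generic_view alone."""
    boss = reports_to.get(person, "no one")
    return f"{person} reports to {boss}"

print(describe("Peter"))  # -> Peter reports to Eric
# From generic_view, the best any tool can say is "a dotted line exists";
# it cannot say that Peter sets priorities for Chris.
```

Both representations describe the same drawing, but only the second carries the org chart's meaning, which is the point: the semantics live in the application, not in the controls.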
Peter suggested that some hybrid might also be a good idea, where the AT product gets most of its information from the accessibility API and the truly application-specific information from the application itself. I still think this requires that the application developer do a fair amount of work to expose that information in a usable manner.
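One way to picture the hybrid idea is the sketch below. Everything here is invented for illustration (the class and method names do not correspond to any real API): the AT narrates from the generic layer in all cases, and the richer description only appears when the application developer has done the extra work of exposing semantics.

```python
class AccessibilityAPI:
    """Generic layer: knows roles and names, but not meaning."""
    def query(self, element):
        return {"role": "rectangle", "name": element}


class VisioLikeApp:
    """Application-specific layer the developer must implement to expose
    the meaning behind the shapes (here, reporting relationships)."""
    def __init__(self, reports_to):
        self._reports_to = reports_to

    def semantics(self, element):
        boss = self._reports_to.get(element)
        return f"reports to {boss}" if boss else None


def narrate(api, app, element):
    """Hybrid narration: generic info always, semantics when available."""
    info = api.query(element)                  # always available
    extra = app.semantics(element) if app else None  # only if the developer did the work
    description = f"{info['name']}, {info['role']}"
    return description + (f", {extra}" if extra else "")


app = VisioLikeApp({"Will": "Chris"})
print(narrate(AccessibilityAPI(), app, "Will"))
# -> Will, rectangle, reports to Chris
print(narrate(AccessibilityAPI(), None, "Will"))
# -> Will, rectangle   (no application support: generic info only)
```

The second call shows the cost Peter's suggestion implies: when the application contributes nothing, the hybrid degrades to exactly the generic output, so the developer effort is still the deciding factor.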