Three Dimensional Web Interfaces

By Will Pearson and Chris Hofstader

A couple of days ago, Will watched Intel CTO Justin Rattner deliver his
keynote address to the Intel Developer Forum (IDF). Rattner addressed
the “3D web” during his speech.

The “3D web” is basically virtual reality environments, such as Second
Life, simulations, etc. Rattner showed three “3D web” applications
during his keynote: one aimed at businesses and two aimed at medical
training and simulation. The “3D web” is nothing new; people have been
working on collaborative virtual environments for years. What makes it
interesting now is that it has caught the attention of the CTO of one
of the leading chip vendors.

What makes the “3D web” really interesting from an accessibility
perspective is that it is fundamentally incompatible with the concept
of a screen reader. The language of the “3D web” is based around
simulations of real-world or imaginary objects, not text. To make a 3D
web application compatible with a current-generation computer access
program like a screen reader would require changing the concept of the
3D web so significantly that it would effectively become just part of
the 2D web.
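
To make the conceptual gap concrete, here is a minimal sketch in
Python; the data structures, field names, and the annotation idea are
illustrative assumptions, not any real 3D web platform or screen
reader API.

    # A 2D web page ultimately reduces to a tree of text that a screen
    # reader can walk and speak:
    page = {
        "heading": "Welcome",
        "paragraphs": ["News and community discussion live here."],
    }

    # A 3D world reduces to a scene graph of objects with geometry and
    # position; there is no inherent text stream to read:
    scene = [
        {"mesh": "table.obj", "position": (0.0, 0.0, 0.0)},
        {"mesh": "avatar.obj", "position": (1.5, 0.0, -2.0),
         "label": "Will's avatar"},
    ]

    # One speculative route to non-visual access is semantic annotation:
    # enumerate and describe objects rather than reading text.
    for obj in scene:
        label = obj.get("label", "unlabelled object")
        print(f"{label} at position {obj['position']}")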

We think this is going to be a massive headache for the screen reader
vendors, and a much bigger problem than CAPTCHA. While those Turing
tests may be the “Whites only” sign of the 21st century, one can at
least get help from a friendly sightie to get into a site; that is a
sub-optimal approach requiring a degrading loss of independence, but
it works in a pinch. The 3D web must be addressed by web accessibility
researchers, human factors experts and, of course, commercial vendors
of access technology. If the predictions made by industry pundits are
correct, a lot of the community activities currently run on email
lists and web forums will move into the 3D web.

There will also be a lot of training and simulation programs built
using the 3D web. So, it looks as though it will be essential that we
blinks gain access to the 3D web.

In conversations we have had with people at various commercial access
technology companies, Mike Calvo of Serotek seems to stand alone in
having committed to a major effort to support dynamic web sites
delivered through AJAX and other Web 2.0 technologies. Neither of us
believes that any vendor of a current screen reader has even started
working on a presentation model for the 3D web, although I (BC) recall
hearing that some research dedicated to finding a non-visual solution
to the 3D web has started at one of the really large companies
(probably IBM, but I do not remember exactly).

A number of years ago, an article called “The Guru of the News,” a
parody of “The Wizard of Oz,” got passed around various news groups
and email lists and was emailed directly to a lot of people, as is the
case for many amusing Internet creations. The gist of the story was
that Richard Stallman, the legendary hacker, was actually the man
behind the curtain and that he worked to maintain a text-only Internet,
as all of the pictures and such simply distracted from the serious
information. As I said, this was a parody written in fun. Stallman
never worked against the advances of the graphical web, but the story
provides a few laughs anyway.

As recently as 2004, though, I attended conferences in which blind
people and advocates for people with disabilities argued strenuously
against any web standards that did not conform to a purely text
presentation model. These people tended to use the Lynx browser or the
W3 emacs plug-in to read web sites. While they represented a small
minority of computer users with vision impairment, they shouted quite
loudly and, in many cases, convinced web developers to provide
blind-guy-ghetto, text-only alternatives to web sites that already
worked quite well with JAWS or Window-Eyes. I think the text-only
people also slowed the adoption of web accessibility standards and
guidelines: although the people who worked on the WAI committees and
other standards bodies around the world devised many excellent ways to
deliver text alternatives to graphical information, the
ghetto-dwelling, text-only Luddites continued to push for text-only
pages. My answer to those people who, in 2004, still used Lynx or W3
was that they had the damned source code to their browsers and should
fix the problems with graphical presentations themselves.

So, as we move toward a 3D web, will we hear the cries of blind people
using JAWS, Window-Eyes, System Access, HAL, VoiceOver, Orca, NVDA or
any of the other current screen readers demanding text-only or 2D
alternatives to interfaces exposed by sites like Second Life?

To date, I (BC) have not spent much time thinking about a non-visual
presentation model for 3D web interfaces. I don’t know if anyone has
even started exploring a user experience for accessing 3D web sites
that one can use without any visual cues. I would like to hear from
anyone who has started thinking about this problem and would enjoy
reading anything that may have been published on the subject.

Afterword

I’ve very much enjoyed the lively discussion in the BC comments area
lately and thank everyone for their constructive posts.

I would, however, like to respond to the anonymous commenter who
claims to have never seen such a “dysfunctional” group before: I
suggest that anyone who makes such non-constructive comments look in a
mirror to see the single most dysfunctional person in his or her life.

— End

Subscribe to the Blind Confidential RSS Feed at: http://feeds.feedburner.com/Blindconfidential

Published by

chris.admin

I'm an accessibility advocate working on issues involving technology and people with print impairment. I'm a stoner, crackpot, hacker and all around decent fellow. I blog at this site and occasionally contribute to Skepchick. I'm a skeptic, atheist, humanist and all around left wing sort. You can follow this blog in your favorite RSS reader, and you can also view my Twitter profile (@gonz_blinko) and follow me there.

8 thoughts on “Three Dimensional Web Interfaces”

  1. Though I agree with most of what you have stated, I would just like to point out one fact with respect to CAPTCHA, and the reason for my continued, insistent advocacy for its accessibility.

    Although CAPTCHA is primarily used for initial signup, it is increasingly being used in situations where it is required more frequently than the initial registration process. Here are a few examples.

    1. Earthlink’s anti-spam challenge requires solving an inaccessible CAPTCHA in order to be allowed to send e-mail to their customers who have enabled their spam protection. Unless you’re already on their approved list, it is quite likely such users will never receive mail from blind people, simply based on our lack of physical eyesight.

    2. Ticketmaster similarly does not allow blind people to order tickets online due to an inaccessible CAPTCHA. It isn’t required for signing up for an account, but it is required each time you would like the privilege of doing business with Ticketmaster. A similar issue continues with GoDaddy.com.

    3. Social networking sites such as Facebook and MySpace require solving CAPTCHAs for many more things than just signing up. For example, even after implementing reCAPTCHA for the signup process, I quickly discovered this work was woefully incomplete when I was required to solve an inaccessible CAPTCHA in order to do something as simple as add a friend, and that was *after* I had already signed into the account!

    Not to say that 3D web accessibility is not critical, but CAPTCHA is a critical issue we need to have corrected right now, as it is locking us out right now! The 3D web issue is also important from a research and development angle at this time.

  2. If anyone is interested in watching Justin Rattner’s IDF keynote then you can find it at:
    http://www.podtech.net/home/4200/wrap-up-news-from-idf-virtual-worlds-and-the-3-d-internet

    I’m researching aspects of haptic collaborative virtual environments for my PhD. So, I spend a lot of time thinking about collaboration in virtual space, how the user experience can be improved, and, just occasionally, what collaborative virtual environments may be used for. Whilst I wasn’t a big fan of collaborative virtual environments when I started my PhD, as I initially took the position more to work with haptics than to work on collaborative systems, I’ve realised just how beneficial collaborative virtual environments could be as I’ve increasingly worked with them.

    Most of the popular press has focused on the virtual community aspects of virtual environments. The virtual communities that have sprung up in environments such as Linden Lab’s Second Life are just one aspect of collaborative virtual environments. The key advantage that collaborative virtual environments offer is that they allow two or more people to manipulate or discuss the same set of objects in a virtual world whilst they are geographically separated in the real world. These activities are facilitated by colocation in virtual space, whereas colocation in the real world would require that the participants in the task travel to the same physical location. So, being able to collaborate on a task with someone from a different geographic area, say a different country, without having to travel is likely to hold lots of benefits for various industrial and business sectors, such as manufacturing, engineering, design, architecture, etc.

    A very trivial example that illustrates the differences is the task of building a tower out of blocks. Two people are asked to build a tower out of a single set of blocks. They have no restrictions on the design and are free to discuss and choose any design they want for the tower. In order to successfully complete this task without the use of a virtual environment, the two participants would have to meet in the physical world. This would allow them both to view the current state of the blocks as the tower is formed and to manipulate the blocks to build the tower and make changes to its design. Using a virtual environment, the participants and the blocks can meet up in virtual space without anyone having to move from wherever they happen to be in the physical world.

    Whilst the example of building a tower out of blocks is fairly trivial, there are some very serious applications that collaborative virtual environments could be used for. One scenario is the master and apprentice scenario: an expert in a physical task exists somewhere in the world, and someone else in the world wants to carry out that task. In order to get the advice of the expert, or to have them participate in the task, the two have to meet up. A similar scenario is that of a design team. Instead of there being a single expert in a physical task, all of the participants in the team are equal, but all need to participate in the task. These scenarios are found in medicine, manufacturing, engineering, construction, and most sectors that involve performing physical tasks.

    Looking beyond collaboration, virtual environments could also play a role in controlling remote equipment. If sensors were attached to equipment, the data from those sensors could be used to update a virtual proxy of the equipment, allowing someone to monitor the state of equipment during a task they were performing remotely. Remote control could also be useful in collaboration, as it allows two or more people to share the same equipment, which is attractive when that equipment is very expensive. A minimal sketch of that sensor-to-proxy loop follows.
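
    Here is that sketch in Python, assuming a simple polling interface; all of the class and field names are hypothetical:

        class FakeSensor:
            """Stand-in for instrumentation attached to real equipment."""
            def read(self):
                return {"temperature_c": 41.2, "spindle_rpm": 1200}

        class VirtualProxy:
            """Object in the virtual environment mirroring the real equipment."""
            def __init__(self, name):
                self.name = name
                self.state = {}

            def update(self, readings):
                # Copy the latest sensor data onto the proxy so remote
                # participants see the equipment's current state.
                self.state.update(readings)

        sensor, proxy = FakeSensor(), VirtualProxy("milling machine")
        proxy.update(sensor.read())  # one cycle of the monitoring loop
        print(proxy.name, proxy.state)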

    So, there are uses for virtual environments beyond virtual communities. Virtual communities are an important aspect of virtual environments and the virtual objects that are found in virtual environments do enhance communities and add extra dimensions to them, but there are also serious business applications for virtual environments.

    I’m fairly sure that the ability to use virtual environments will become increasingly important. Important business and social functions are already starting to take place in virtual environments: companies are running training in them, they are using them for presentations, educational institutions are making use of virtual environments to run their courses, and people from different parts of the world are remotely collaborating on tasks associated with their industrial or business sector. At the moment blind people are prevented from participating in all this because of the conceptual conflict that exists between collaborative virtual environments and screen readers. Virtual environments work with objects, whereas screen readers work with text. To change virtual environments so that they work with text would be to destroy the virtual environment concept and everything that is beneficial about it. So, new forms of access technology need to be developed that work with the concept of virtual environments in their current form.

    Some research has already been done into methods that blind people can use to access collaborative environments. It typically takes the form of custom-made collaborative environments, such as in the MICOLE project, but the interaction methods should be transferable to any collaborative environment. There is also ongoing work on audio-based virtual environments and haptic-only virtual environments, such as my work on haptics.

    Will

  3. This is a very professionally written article. It’s beyond important for people to consider each and every aspect of the potential audiences, even though that’s not always very easy to do.

  4. I don’t disagree that CAPTCHA is an important issue, and one that is blocking blind people from performing a range of tasks at the moment. Whilst the existing virtual environments aren’t locking out blind people to the same extent as CAPTCHA at the moment, they are still preventing blind people from participating in some tasks.

    One interesting trend that I’ve noticed is that more and more technologies are starting to become incompatible with screen readers at a conceptual level. It used to be that the majority of accessibility problems were due to developers not using the appropriate function calls, such as TextOut, to enable screen readers to detect text on a screen, or to developers failing to provide keyboard navigation within their software. Whilst this prevented screen readers from functioning correctly, these problems could be fixed; at the conceptual level, software with these types of problems was usually conceptually compatible with a screen reader. I think that we are starting to see a new type of accessibility problem now: conceptual incompatibilities between screen readers and technologies such as CAPTCHA and the 3D web. The concept of a screen reader is fairly simple; it reads the text on the screen and simulates attentional focus. If a technology involves no text then the concept of a screen reader breaks. This is fundamentally why technologies such as CAPTCHA and the 3D web will never work with a screen reader. A minimal sketch of this conceptual break follows.
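
    Here is that sketch in Python; the hooks and names are invented and do not reflect any real screen reader’s internals:

        def speak(text):
            print("TTS:", text)  # stand-in for a text-to-speech engine

        class ScreenReader:
            """Reads intercepted text output and simulates attentional focus."""
            def __init__(self):
                self.focus = None

            def on_text_drawn(self, widget, text):
                # Works: a hooked TextOut-style call hands us a string.
                if widget == self.focus:
                    speak(text)

            def on_object_rendered(self, mesh, transform):
                # Breaks: a 3D renderer hands us geometry, not text.
                # There is nothing here to read aloud.
                pass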

    So, it would seem that we need to start rethinking the concept of a screen reader if we want to avoid these conceptual incompatibilities in the future. Whilst I expect text to be around for a long time to come, I also expect the number of technologies that do not use text to grow; text isn’t the only form of communication, and it is fundamentally wrong from a communications perspective to base accessibility on a single form of communication.

  5. I think the discussion we had as a WAI/RD virtual workshop on Access to Visualization is still quite relevant here. There I was pushing a VR framework for working the problem, much like Raman’s insistent push for model-based web forms.

    One important thing to note is where there are tendrils of practice reaching out from the research frontier toward emerging mass market practice. Among these I would again mention X3D, Ron Kikinis’s work including an open source library for rendering medical imagery, and the cheap force-feedback joystick that makes the UniTherapy technique something you can send stroke patients home with.

    Note that the screen reader paradigm maps a 2D interface to a 1D interface, unless you are among the minority using Braille. So 3D and beyond is indeed a new ball game. I can’t even get people off the 1.5D TableOfNavigation paradigm to understand the free 2D graphic composition of portal mash-ups unless they have vision. And the money is off chasing the holy grail of interactive TV on your mobile phone. Learn MPEG-21. The good news is that, IIRC, MPEG are basing their interaction layer on X3D. And mobile devices prefer vector graphics. But still the good news is the exception. What they say is ‘broadband’ and what you get is ‘wideband,’ for the most part. The model is in the difference. A minimal sketch of that 2D-to-1D mapping follows.
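
    Here is that sketch in Python, with invented widget data; real screen readers use far richer heuristics than this:

        # Widgets positioned on a 2D page; a screen reader linearizes
        # them into a single 1D reading order, conventionally
        # top-to-bottom and left-to-right.
        widgets = [
            {"text": "Search box", "x": 300, "y": 10},
            {"text": "Site logo", "x": 10, "y": 10},
            {"text": "Article body", "x": 10, "y": 120},
        ]

        for w in sorted(widgets, key=lambda w: (w["y"], w["x"])):
            print(w["text"])  # Site logo, Search box, Article body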

  6. Perhaps we should not focus exclusively on screen readers and haptics to provide access for blind people in 3D virtual reality. If the aim of virtual reality is to become more and more lifelike, let’s think about the actual real-life experience of individuals moving about in the real world and how they interact with other people.

    Blind and low vision people generally get around outside familiar surroundings with the aid of a cane, a guide dog or a sighted companion. When more assistance is needed, there is usually a store staff person or a passerby whom one can ask for directions or other information. This is not something that just blind people do; it is natural human behaviour.

    Why not have a service avatar provide a similar service? Imagine a humanoid robot like C-3PO, the protocol droid in Star Wars, who could guide the avatar of a player, give verbal directions, describe scenes and activities, etc. This is rather like a personal tour guide. Add some more services, like language translation for players in other countries, ASL for players who are deaf, and information retrieval to answer questions knowledgeably, and you broaden the appeal and usefulness of such an avatar. It would serve more than just the sight-impaired players.

    I think there is a lot of technology already out there that could be brought to bear on this. In Japan, for example, some stores have robots that can greet customers and even take them to a particular department. Voice and natural language recognition, text-to-speech and text-to-ASL engines, and language translation software are already very advanced and improving. The underlying architecture of the virtual space must have some basic navigation functions that might respond to verbal commands in lieu of a joystick or whatever it is that players use to travel about in Second Life; a sketch of that idea follows this comment.

    A service companion avatar should probably become a standard feature in 3D virtual reality in the same way that online help is a ubiquitous feature in Windows.
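
    A minimal sketch of verbal navigation commands standing in for a joystick, as the comment above suggests; the guide avatar’s API and the command grammar are entirely invented:

        class GuideAvatar:
            """Hypothetical C-3PO-style service companion."""
            def __init__(self):
                self.location = "the central plaza"

            def walk_to(self, place):
                self.location = place
                return f"Walking you to {place}."

            def describe(self):
                return f"You are at {self.location}; two avatars are chatting nearby."

        def handle_command(avatar, command):
            # Dispatch a recognized spoken phrase to a navigation action.
            words = command.lower().split()
            if words[:2] == ["walk", "to"]:
                return avatar.walk_to(" ".join(words[2:]))
            if words and words[0] == "describe":
                return avatar.describe()
            if words and words[0] == "where":
                return f"You are at {avatar.location}."
            return "Sorry, I did not understand that."

        guide = GuideAvatar()
        print(handle_command(guide, "walk to the auction house"))
        print(handle_command(guide, "describe"))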
