by Arthur I. Karshmer
The use of modern computers and software by the visually handicapped has become more difficult over the past few years. In earlier systems, the user interface was a simple character-based environment, and simple aids such as screen readers, braille output devices and speech synthesizers were effective. Current systems run Graphical User Interfaces (GUIs), which have rendered these simple aids almost useless. In no area has this problem become more acute than in technologies for the handicapped: what has become enabling technology for the sighted has become disabling technology for the visually impaired in our society.
It is difficult to find literature describing the design and testing methodologies used in producing the vast majority of interfaces currently on the market. Most seem to be built in an ad hoc manner, with a particular problem in mind, and little attention is paid to underlying structures or to the capabilities of the users. While this approach works for individual products, it does so only in an immediate, case-by-case way, without regard for any broader solution.
Until now, the prevailing approach has been to add tools of convenience to existing interfaces to suit the visually handicapped user. Such systems graft additional technologies onto existing platforms to provide a more comfortable interface, with little concern for the bandwidth or the generality of the result. We therefore now find ourselves with a wide variety of interfaces showing little consistency and little attention to the underlying problem.
Our current view is that we must conduct experiments to better understand the underlying physical and mental processes, so that we can design better tools. This research should be a joint effort drawing on a range of expertise; our team, for example, includes traditional computer scientists, human interface design experts, and several psychologists.
The problems we have chosen to study center on alternative methods of allowing a visually impaired (in our case, totally blind) user to navigate complex menu structures and the window-oriented GUI desktop. Our testbed systems are designed to help us understand the underlying structures associated with these activities, rather than to arrive at the final design of a specific product. The test tools are easily modified, permitting new experiments to be run in rapid succession. Currently, we are experimenting with a variety of spoken and tonal cues to aid such navigation, testing a wide range of combinations in the hope of understanding why some work and others do not.
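To illustrate the kind of tonal cue scheme under test, the short Python sketch below maps each menu position to a tone whose pitch encodes its depth and sibling order. The mapping itself is hypothetical; the article does not specify the scheme actually used in the testbed.

# Hypothetical cue scheme (not the testbed's actual mapping): each level
# of the menu hierarchy raises the pitch by an octave, and each sibling
# step raises it by one semitone, so a listener can hear position.

A4 = 440.0  # reference pitch in Hz

def cue_frequency(depth: int, index: int) -> float:
    """Tone for the menu item at the given depth and sibling index."""
    semitones = 12 * depth + index
    return A4 * 2 ** (semitones / 12)

# A toy two-level menu, standing in for the GUI structures being navigated.
menu = {
    "File": ["Open", "Save", "Quit"],
    "Edit": ["Cut", "Copy", "Paste"],
}

for i, (top, items) in enumerate(menu.items()):
    print(f"{top}: {cue_frequency(0, i):.1f} Hz")
    for j, item in enumerate(items):
        print(f"    {item}: {cue_frequency(1, j):.1f} Hz")

In such a scheme the frequencies would drive a tone generator rather than be printed; the point is only that a fixed, systematic mapping lets the user infer position from pitch alone.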
Considerable research has been done to identify non-speech sounds for use in sound-enhanced interfaces. In our system, non-speech sounds are being tested as aids to navigating complex hierarchical structures, namely menus. Work by Edwards examined associating sounds with screen objects, while Gaver has proposed using naturally occurring or 'everyday' sounds to represent actions within the interface.
While researchers have used sound to enhance the learning and usability of computer interfaces, the choice of which sounds (pitches) to use has often been made without the benefit of solid empirical data. The frequent implication is that an individual without musical training cannot make use of real musical information presented by the interface. There is, however, a growing body of research indicating that even untrained listeners can consistently identify sounds within a given tonal context. Studies have shown positive results on 'well-fittedness', indicating that sounds drawn from an established tonal context have qualities listeners can perceive.
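To make the 'tonal context' idea concrete, the sketch below (our illustration, not the stimuli of any cited study) constrains cue pitches to a single major scale instead of drawing them from arbitrary frequencies, on the premise that even untrained listeners judge scale-conformant tones more consistently.

# Illustrative only: cue pitches drawn from one tonal context (C major)
# rather than from arbitrary frequencies.

C4 = 261.63  # middle C in Hz
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets within a major scale

def scale_frequency(degree: int) -> float:
    """Frequency of the given C-major scale degree (0 = tonic),
    continuing into higher octaves past the seventh degree."""
    octave, step = divmod(degree, len(MAJOR_STEPS))
    return C4 * 2 ** ((12 * octave + MAJOR_STEPS[step]) / 12)

# Eight consecutive menu slots mapped onto the scale:
for slot in range(8):
    print(f"slot {slot}: {scale_frequency(slot):.2f} Hz")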
Please contact:
Arthur Karshmer, New Mexico State University
Tel: +1 505 646 3724
E-mail: arthur@cs.nmsu.edu