Auditory User Interfaces: Toward the Speaking Computer describes a speech-enabling approach that separates computation from the user interface and integrates speech directly into human-computer interaction. The Auditory User Interface (AUI) works directly with the computational core of the application, just as the Graphical User Interface (GUI) does.
The author's approach is implemented in two large systems: ASTER, a computing system that produces high-quality interactive aural renderings of electronic documents, and Emacspeak, a full-fledged speech interface to workstations that provides fluent spoken access to the World Wide Web and many desktop applications. Using this approach, developers can design new high-quality AUIs.
Auditory interfaces are presented through concrete examples implemented on an electronic desktop. This aural desktop system enables applications to produce auditory output from the same information used for conventional visual output.
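The idea of one computational core driving both visual and aural renderings can be sketched roughly as follows. This is a minimal illustration of the separation principle only; the class and function names are invented for this sketch and do not come from ASTER or Emacspeak, and a real AUI renderer would drive a speech synthesizer rather than return text.

```python
class Document:
    """Computational core: holds document structure, knows nothing
    about how it will be presented (visually or aurally)."""
    def __init__(self, title, sections):
        self.title = title
        self.sections = sections

def render_visual(doc):
    """Hypothetical GUI-style renderer: lays the structure out as text."""
    lines = [doc.title.upper()]
    lines += [f"  * {s}" for s in doc.sections]
    return "\n".join(lines)

def render_aural(doc):
    """Hypothetical AUI-style renderer: produces the utterance a
    speech synthesizer might speak, from the same structure."""
    parts = [f"Document titled {doc.title}."]
    parts += [f"Section: {s}." for s in doc.sections]
    return " ".join(parts)

# Both renderers consume the identical computational core.
doc = Document("Auditory Interfaces", ["Motivation", "Design"])
print(render_visual(doc))
print(render_aural(doc))
```

The point of the sketch is that neither renderer owns the document: each interrogates the same underlying structure, so the aural presentation is computed from first-class information rather than scraped from the visual display.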
Auditory User Interfaces: Toward the Speaking Computer is written for the electrical and computer engineering professional in the field of human-computer interface design. It will also interest academic and industrial researchers, as well as engineers designing and implementing computer systems that speak. Communication devices such as hand-held computers, smart telephones, and talking web browsers will need to incorporate speech-enabling interfaces to be effective.