AVOCADO - The Virtual Environment Framework
by Henrik Tramberend, Frank Hasenbrink, Gerhard Eckel and Uli Lechner
AVOCADO is a software framework designed to allow the rapid development of virtual environment applications for immersive and non-immersive display setups like the CAVE (CAVE Automatic Virtual Environment), CyberStage, Responsive Workbench and Teleport. It supports the programmer in all tasks involved with these types of applications.
AVOCADO integrates a variety of interface devices and is easily extensible and adaptable to new devices yet to be invented. It is highly interactive and responsive, supports a rapid-prototyping style of application development, and will enable the development of truly distributed applications. It is targeted at high-end Silicon Graphics workstations and aims to deliver the best performance these machines are capable of.
AVOCADO includes the following concepts:
- Viewer: All kinds of configurations of input and output devices can be assembled into viewers. A viewer forms the interface between the user and the virtual world. Typical elements of a viewer are the visual, auditory and tactile displays as output devices and spatial trackers, audio or video sources as input devices. In a multi-user environment every user configures his or her own viewer.
- Scripting: All relevant parts of the system's Application Programming Interface are mapped to Scheme, an interpreted scripting language. This enables the user to specify and change scene content, viewer features and object behavior in a running system.
- Streaming: All objects know how to read and write their state to and from a stream. This is the basic facility needed to implement object persistence and network distribution. Persistence together with streaming support for objects enables the user to write the complete state of the system to a disk file at any time. An initial system state can be read from a disk file as well (see the sketch after this list).
- Distribution: All AVOCADO objects are distributable, and their state is shared by any number of participating viewers. Object creation, deletion and all changes at one site are immediately and transparently distributed to every participating viewer.
- Extensions: The system is extensible by subclassing existing C++ system classes. This applies to object classes as well as to classes that encapsulate viewer features. Compiled extensions can be loaded into the system at runtime via Dynamic Shared Objects, as illustrated in the sketch after this list.
- Interaction: Viewers provide input/output services which can be mapped to objects in the scene. Objects can respond to events generated from input devices or other objects and can deliver events to output devices.
- Visual Rendering: Each display has its own rendering mechanism applied to the modeling hierarchy. Only the visual rendering has direct access to the Performer pipeline; the auditory and the tactile rendering can be computed on a second computer, connected to the master by a fast network.
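The following C++ sketch illustrates how the Streaming and Extensions concepts fit together. All names in it (the base class av::Object, the methods writeState and readState, the example class SpinningCube) are assumptions made for illustration and are not AVOCADO's actual API; an extension of this kind would be compiled separately and loaded into the running system as a Dynamic Shared Object.

```cpp
// Hypothetical illustration only: av::Object, writeState(), readState()
// and SpinningCube are assumed names, not AVOCADO's actual API.
#include <iostream>

namespace av {

// Assumed system base class: every object can stream its state,
// which is the basis for persistence and network distribution.
class Object {
public:
    virtual ~Object() {}
    virtual void writeState(std::ostream& out) const = 0;
    virtual void readState(std::istream& in) = 0;
};

} // namespace av

// A compiled extension: a new object class derived from the system
// base class, loadable at runtime as a Dynamic Shared Object.
class SpinningCube : public av::Object {
public:
    SpinningCube() : angle_(0.0f), speed_(30.0f) {}

    void writeState(std::ostream& out) const { out << angle_ << ' ' << speed_ << '\n'; }
    void readState(std::istream& in)         { in  >> angle_ >> speed_; }

private:
    float angle_;  // current rotation angle in degrees
    float speed_;  // rotation speed in degrees per second
};
```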
Visual Data Processing
The visual data processing is organized in a pipeline and computed in parallel by Performer. This rendering pipeline consists of a set of optional units for:
- a database connection
- a user application
- the visual culling of the scene
- the intersection of objects
- the drawing of the scene.
After the modeling hierarchy has been updated to its current state in the application process, it is passed on to the culling process, which strips all invisible objects. It is important to support this technique by dividing large geometry into smaller, cullable objects. The part of the scene left over after culling is passed on to the drawing process, where it is rendered to the screen with OpenGL. For configurations with more than one visual display, the appropriate number of pipelines is used.
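As a rough illustration of this application-cull-draw split, the sketch below shows the general shape of an IRIS Performer main loop with separate APP, CULL and DRAW processes. It is a schematic example assuming a single pipe and default channel settings, not AVOCADO's actual code, and details vary with the Performer version.

```cpp
// Schematic IRIS Performer APP/CULL/DRAW loop (not AVOCADO's actual code).
#include <Performer/pf.h>

int main()
{
    pfInit();                               // initialize Performer
    pfMultiprocess(PFMP_APP_CULL_DRAW);     // fork separate APP, CULL and DRAW processes
    pfConfig();

    pfPipe*       pipe = pfGetPipe(0);      // one rendering pipeline per visual display
    pfPipeWindow* pwin = pfNewPWin(pipe);
    pfOpenPWin(pwin);

    pfChannel* chan  = pfNewChan(pipe);
    pfScene*   scene = pfNewScene();        // the modeling hierarchy (scene graph)
    pfChanScene(chan, scene);

    for (int frame = 0; frame < 1000; ++frame) {
        pfSync();   // wait for the next frame boundary
        // APP stage: update the modeling hierarchy here. The CULL process then
        // strips invisible objects and DRAW renders the remainder with OpenGL.
        pfFrame();  // trigger CULL and DRAW for this frame
    }

    pfExit();
    return 0;
}
```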
Auditory Rendering
Rendering the auditory scene has to take into account the position of the observer's head in the virtual world and in the auditory display, as well as the characteristics of the auditory display (ie the loudspeaker configuration). Auditory rendering is a two-stage process: in the first stage a source signal is synthesized, and in the second stage it is spatialized. In the first stage only the sound model parameters are needed by the rendering process. In the second stage the signals driving the auditory display are computed as a function of the distance between observer and sound source, the radiation characteristics of the source and the signature of the acoustic environment.
With these signals the auditory display produces the illusion of a sound emitted from a certain position in an acoustic environment shared by the observer and the source. The sound rendering is a dynamic process that takes into account movements of the observer in the display, movements in the virtual world, and movements of the sound source. If these movements are faster than about 30 km/h, the pitch changes due to Doppler shift are simulated as well.
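As a generic illustration of the second stage (not AVOCADO's actual renderer), the spatialization parameters can be thought of as a distance-dependent gain and a Doppler factor applied to the synthesized source signal; the sketch below uses the textbook 1/r attenuation law and the moving-source Doppler formula.

```cpp
// Generic illustration of two spatialization parameters: distance
// attenuation and Doppler factor. Not AVOCADO's actual renderer.
#include <algorithm>
#include <cstdio>

const double kSpeedOfSound = 343.0;  // m/s in air

// Gain falls off with the distance between observer and source (1/r law,
// clamped below one metre to avoid unbounded amplification).
double distanceGain(double distance_m)
{
    return 1.0 / std::max(distance_m, 1.0);
}

// Doppler factor for a source moving towards the observer with radial
// speed v: perceived frequency = factor * emitted frequency.
double dopplerFactor(double radialSpeed_ms)
{
    return kSpeedOfSound / (kSpeedOfSound - radialSpeed_ms);
}

int main()
{
    // The article's threshold of about 30 km/h corresponds to roughly 8.3 m/s.
    double v = 30.0 / 3.6;
    std::printf("gain at 5 m: %.3f, Doppler factor at 30 km/h: %.4f\n",
                distanceGain(5.0), dopplerFactor(v));
    return 0;
}
```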
Tactile Rendering
The CyberStage display has a set of low-frequency emitters built into its floor. This allows vibrations to be generated, which can be felt through the feet and legs. There are two main areas of application of this display component. First, low frequency sound (which cannot be localized) can be emitted to complement the loudspeaker projection. Second, specially synthesized low frequency signals can be used to convey attributes such as roughness or surface texture.
The vibration display is handled like sound in the rendering process. Sound models are used to generate the low-frequency signals. Sound synthesis techniques generally referred to as granular synthesis are very well suited to producing band-limited impulses that may represent surface features. Such features can be displayed through user interaction: for instance, a virtual pointing device can be used to slide or glide over a surface and produce vibrations. Additionally, higher-frequency sound can also be produced if necessary. Some of what is usually felt through the skin of our fingers when sliding over an object is thus presented to our feet. This sensation can complement sound and vision dramatically.
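To make the granular approach concrete, the sketch below generates a single grain: a short, Hann-windowed sinusoid that acts as a band-limited impulse. It is a generic illustration of the technique, not AVOCADO's actual sound models; the function name and parameters are chosen for the example only.

```cpp
// Generic sketch of a single grain for granular synthesis: a short,
// Hann-windowed sinusoid acting as a band-limited impulse.
// Not AVOCADO's actual sound models.
#include <cmath>
#include <cstddef>
#include <vector>

std::vector<float> makeGrain(double freq_hz, double duration_s, double sampleRate_hz)
{
    const double pi = 3.14159265358979323846;
    const std::size_t n = static_cast<std::size_t>(duration_s * sampleRate_hz);
    std::vector<float> grain(n);
    if (n < 2)
        return grain;  // degenerate duration: nothing meaningful to synthesize

    for (std::size_t i = 0; i < n; ++i) {
        double t = static_cast<double>(i) / sampleRate_hz;
        // The Hann window confines the impulse in time ...
        double window = 0.5 * (1.0 - std::cos(2.0 * pi * i / (n - 1)));
        // ... while the sinusoid confines its energy in frequency.
        grain[i] = static_cast<float>(window * std::sin(2.0 * pi * freq_hz * t));
    }
    return grain;
}

// Sliding a virtual pointer over a rough surface could then trigger such
// grains at a rate derived from the pointer speed, feeding the
// low-frequency floor emitters, eg one 40 Hz, 50 ms grain per surface bump:
//   std::vector<float> bump = makeGrain(40.0, 0.05, 48000.0);
```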
Please contact:
Henrik Tramberend - GMD
Tel: +49 2241 14 2364
E-mail: henrik.tramberend@gmd.de
Frank Hasenbrink - GMD
Tel: +49 2241 14 2051
E-mail: frank.hasenbrink@gmd.de
Gerhard Eckel - GMD
Tel: +49 2241 14 2968
E-mail: gerhard.eckel@gmd.de
Uli Lechner - GMD
Tel: +49 2241 14 2984
E-mail: ulrich.lechner@gmd.de