Below we have kept some older (low-resolution) demos that were pioneering in their time.

Some pioneering demos

The entrance to the virtual lab, located at the VR-Lab.
This is how we started in the 1990s.

Walking through the virtual lab ...

... and out through the window.

Now approaching the speedbike.

Finally arriving at the workplace. Some bars are connected using gesture and speech... The SGIM project (1996-1999) investigated gesture- and speech-based human-computer interaction using large-screen displays.

This work is now used in several new projects where gesture and speech interaction is required...

The "Articulated Communicator" is a virtual humanoid agent designed to produce complex multimodal utterances (comprising speech, gesture, speech animation, and facial expression) from symbolic specifications of their outer form.

Our Text-to-Speech system builds on and extends the capabilities of txt2pho
and MBROLA: it controls prosodic parameters such as speech rate and intonation.
Furthermore, it facilitates contrastive stress and offers mechanisms to
synchronize pitch peaks in the speech output with external events.
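To illustrate the pitch-peak synchronization, here is a minimal sketch in the spirit of MBROLA's .pho input format (one phoneme per line: symbol, duration in ms, then pairs of percent-position and pitch in Hz). The `make_pho` helper, the phoneme symbols, and all pitch values are invented for illustration; the actual system's mechanism is not shown here.

```python
# Hypothetical sketch: generate MBROLA-style .pho lines with a pitch
# peak placed at a given absolute time (e.g. the time of an external
# event such as a gesture stroke). All names and values are illustrative.

def make_pho(phonemes, peak_time_ms, peak_hz=180, base_hz=110):
    """phonemes: list of (symbol, duration_ms) pairs.
    Returns .pho lines; the phoneme containing peak_time_ms gets a
    rise-fall pitch contour with its peak at that time."""
    lines, t = [], 0
    for sym, dur in phonemes:
        if t <= peak_time_ms < t + dur:
            pos = round(100 * (peak_time_ms - t) / dur)  # percent into phoneme
            line = f"{sym} {dur} 0 {base_hz} {pos} {peak_hz} 100 {base_hz}"
        else:
            line = f"{sym} {dur} 50 {base_hz}"  # flat base pitch
        lines.append(line)
        t += dur
    return "\n".join(lines)

# Peak at 230 ms falls one third into the 150 ms vowel starting at 180 ms.
print(make_pho([("_", 100), ("h", 80), ("a", 150), ("l", 70), ("o", 160)], 230))
```

Shifting `peak_time_ms` is then enough to align the accented syllable's pitch maximum with an external event.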

Recognition of iconic gestures:
Gestures can be used to indicate the shape of objects.
In the example, the user indicates a cube, a cylindrical object, and a bar with his hands.
The system determines the object that best matches the gestural description.
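The matching step above can be sketched as a nearest-prototype search: the gestural shape description is reduced to a small feature vector and compared against stored object descriptions. The feature names, prototype values, and the Euclidean metric are assumptions for illustration, not the system's actual representation.

```python
import math

# Illustrative object prototypes described by invented shape features.
PROTOTYPES = {
    "cube":     {"elongation": 1.0, "roundness": 0.0},
    "cylinder": {"elongation": 2.0, "roundness": 1.0},
    "bar":      {"elongation": 5.0, "roundness": 0.0},
}

def best_match(gesture_features):
    """Return the prototype closest to the gestural description
    (Euclidean distance over the shared feature keys)."""
    def dist(proto):
        return math.sqrt(sum((gesture_features[k] - proto[k]) ** 2 for k in proto))
    return min(PROTOTYPES, key=lambda name: dist(PROTOTYPES[name]))

# A long, thin, angular two-handed posture should match the bar.
print(best_match({"elongation": 4.5, "roundness": 0.1}))  # → bar
```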

Imaginal prototypes are parametric shape representations for 3D object recognition.
Imaginal prototypes can be defined at several levels of abstraction.
The video demonstrates how an abstract skeleton model is used to classify different types of airplanes (and even only partially assembled toy airplanes). Caution: 17 MB MPEG file, takes a moment to load...
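One way to picture such a parametric skeleton match is as a set of part slots with admissible parameter ranges, scored so that partially assembled models still receive a (lower) score. The part names, parameter ranges, and scoring rule below are invented for illustration and do not reproduce the actual imaginal-prototype formalism.

```python
# Hypothetical airplane skeleton prototype: each part slot accepts a
# range of lengths relative to the fuselage. Values are illustrative.
AIRPLANE_PROTOTYPE = {
    "fuselage": (0.8, 1.2),
    "wing":     (0.4, 0.9),
    "tail":     (0.1, 0.4),
}

def match_score(observed):
    """Fraction of prototype parts that are present in the observed
    skeleton and fall within their admissible parameter range.
    Missing parts lower the score instead of rejecting the object,
    so partial assemblies can still be classified."""
    hits = 0
    for part, (lo, hi) in AIRPLANE_PROTOTYPE.items():
        if part in observed and lo <= observed[part] <= hi:
            hits += 1
    return hits / len(AIRPLANE_PROTOTYPE)

print(match_score({"fuselage": 1.0, "wing": 0.6, "tail": 0.2}))  # complete model
print(match_score({"fuselage": 1.0, "wing": 0.6}))               # tail not yet attached
```

A classifier would then compare scores across prototypes defined at different abstraction levels and pick the best-scoring one.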

From the AkuVis project: a moment of interaction with a changeable landscape that visualizes the noise conditions in a city district of Bielefeld...