Although extensive safety measures and safe working procedures have been introduced to make metal-working machines safer, these machines still put their operators at risk. Such risks often result from operating errors, particularly when safety measures are ignored. In this contribution, a safety evaluation strategy is presented that applies virtual reality (VR) and mixed reality technologies to investigate the usability of working machines. An automatically controlled machine tool was simulated and connected to a real input panel of the kind commonly used in industrial settings. Human-machine interfaces, however, are sometimes designed in a way that does not prevent the operator from cognitive misinterpretations, which in turn can lead to mistakes. To account for this, the control program of a lathe was altered by hiding a typical programming mistake in the lines of code. Subjects were given the task to evaluate the program in single-step mode and to report abnormalities while running the simulated lathe, comparable to checking a new control program on a real machine. The evaluation of the study demonstrated that even experienced metal workers accepted the simulation and reacted as if the given task were real. The behavioural data of the subjects showed comparable profiles, and most subjects rated the VR-based approach as a reasonable means of investigating work-safety problems.

This contribution presents an easy-to-implement 3D tracking approach that works with a single standard webcam. We describe the algorithm and show that it is well suited for use as an intuitive interaction method in 3D video games. The algorithm detects and distinguishes multiple objects in real time and obtains their position and orientation relative to the camera. The trackable objects are equipped with planar patterns of five visual markers. By tracking the (stereo) glasses worn by the user and adjusting the in-game camera's viewing frustum accordingly, the well-known immersive "screen as a window" effect can be achieved, even without the use of any special tracking equipment.
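The "screen as a window" effect mentioned above is commonly realized with a generalized off-axis projection: the tracked position of the glasses defines an asymmetric viewing frustum relative to the physical screen. The following is only a minimal sketch of that standard computation, not the paper's implementation; the screen corners, head position, and near-plane distance used in it are hypothetical example values.

```python
import math

def off_axis_frustum(pa, pb, pc, pe, near):
    """Frustum extents (left, right, bottom, top) at the near plane for a
    head-tracked, screen-aligned off-axis projection.

    pa, pb, pc: lower-left, lower-right, upper-left screen corners (world space)
    pe: tracked eye/head position, near: near-plane distance
    """
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    def norm(a):
        l = math.sqrt(dot(a, a))
        return [x / l for x in a]
    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0]]

    vr = norm(sub(pb, pa))          # screen-space right axis
    vu = norm(sub(pc, pa))          # screen-space up axis
    vn = norm(cross(vr, vu))        # screen normal, toward the viewer

    va, vb, vc = sub(pa, pe), sub(pb, pe), sub(pc, pe)  # eye -> corners
    d = -dot(vn, va)                # perpendicular eye-to-screen distance
    left   = dot(vr, va) * near / d
    right  = dot(vr, vb) * near / d
    bottom = dot(vu, va) * near / d
    top    = dot(vu, vc) * near / d
    return left, right, bottom, top
```

When the head is centered in front of the screen this reduces to an ordinary symmetric frustum; as the head moves sideways the frustum becomes asymmetric, which is exactly what makes the screen act like a window.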

The FIVIS simulator system addresses the classical visual and acoustic cues as well as vestibular and further physiological cues. Sensory feedback from skin, muscles, and joints is integrated within this virtual-reality visualization environment, which makes it possible to simulate otherwise dangerous traffic situations in a controlled laboratory setting. The system has been successfully applied in road-safety education for school children. In further research studies it is used to perform multimedia perception experiments. It has been shown that visual cues dominate the perception of visual depth by far in the majority of applications, but the quality of depth perception may depend on the availability of other sensory information. This, however, needs to be investigated in more detail in the future.

This contribution describes an optical laser-based user interaction system designed for virtual reality (VR) environments. The project's objective is to realize a 6-DoF user input device for interaction with VR applications running in CAVE-type visualization environments with flat projection walls. In contrast to optical tracking systems, no camera has to be placed inside the visualization environment of a back-projection VR system. Instead, cameras observe patterns of laser-beam projections from behind the screens; these patterns are emitted by a hand-held input device. The system is robust with respect to partial occlusion of the laser pattern. An inertial measurement unit is integrated into the device to further improve robustness and precision.
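The role of the integrated inertial measurement unit can be illustrated with a complementary filter: the laser-based pose is drift-free but may be noisy or briefly occluded, while the gyro delivers smooth high-rate increments that drift over time. The sketch below is a generic fusion scheme of this kind, not the project's actual algorithm; the blending factor and update rate are hypothetical example values.

```python
def fuse_orientation(angle_prev, gyro_rate, optical_angle, dt, alpha=0.98):
    """One complementary-filter step for a single orientation angle (radians).

    angle_prev: previous fused estimate
    gyro_rate: angular rate from the IMU (rad/s)
    optical_angle: absolute angle from the laser-based tracking
    dt: time step in seconds; alpha: gyro weight (hypothetical value)
    """
    predicted = angle_prev + gyro_rate * dt          # high-rate gyro integration
    # slowly pull the estimate toward the absolute optical measurement,
    # which cancels the accumulated gyro drift
    return alpha * predicted + (1.0 - alpha) * optical_angle
```

With alpha close to 1, the gyro dominates short-term motion while the optical measurement anchors the long-term estimate, so brief occlusions of the laser pattern leave the pose usable.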

In this contribution, we present several improvements to previous “inside-out” techniques for pointing interaction with large display systems. Fiducial markers are virtually projected from an interaction device's built-in camera onto the displays and overlaid on the display content. We reconstruct the 6-DoF camera pose by tracking these markers in real time. For increased robustness, the marker pattern is dynamically adapted. We address display lag and high pixel response times by precisely timing image captures. Pointing locations are measured with sub-millimeter precision and camera positions with sub-centimeter precision. An update rate of 60 Hz and a latency of 24 ms were achieved. Our technique performed comparably to an OptiTrack system in 2D target selection tasks.
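Once the 6-DoF camera pose has been reconstructed from the markers, the pointing location on the display follows from intersecting the device's optical axis with the display plane. A minimal sketch of that final step (the poses and plane in the example are hypothetical values, not measurements from the paper):

```python
def pointing_location(cam_pos, cam_dir, plane_point, plane_normal):
    """Intersect the device's viewing ray with the display plane.

    cam_pos: reconstructed camera position; cam_dir: its optical axis
    (unit direction vector); plane_point/plane_normal: the display plane.
    Returns the 3D pointing location, or None if the ray is parallel.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(cam_dir, plane_normal)
    if abs(denom) < 1e-9:
        return None                      # ray parallel to the display
    # ray parameter s such that cam_pos + s * cam_dir lies on the plane
    s = dot([p - c for p, c in zip(plane_point, cam_pos)], plane_normal) / denom
    return [c + s * d for c, d in zip(cam_pos, cam_dir)]
```

The precision figures quoted above then apply to exactly this intersection point: small errors in the reconstructed pose translate into small offsets of the computed pointing location.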

The objective of this research project is to develop a user-friendly and cost-effective interactive input device that allows intuitive and efficient manipulation of 3D objects with six degrees of freedom (6 DoF) in virtual reality (VR) visualization environments with flat projection walls. The project plans to develop an extended version of a laser pointer with multiple laser beams arranged in a specific pattern. Using stationary cameras that observe the projections of this pattern from behind the screens, an algorithm will be developed to reconstruct the emitter's absolute position and orientation in space. The laser-pointer concept is an intuitive form of interaction that provides the user with a familiar, mobile, and efficient means of navigating through a 3D environment. Navigating in a 3D world requires knowing the absolute position (x, y, and z) and orientation (roll, pitch, and yaw angles) of the device, a total of six degrees of freedom. An ordinary laser pointer, when captured on a flat surface with a video-camera system and then processed, only provides x and y coordinates, effectively reducing the available input to 2 DoF. To overcome this limitation, an additional set of multiple (invisible) laser pointers is used in the pointing device. These laser pointers are arranged so that the projections of their rays form a fixed dot pattern when they intersect the flat surface of the projection screens. Images of this pattern are captured by a real-time camera-based system and then processed using mathematical re-projection algorithms, which allows the full absolute 3D pose (6 DoF) of the input device to be reconstructed. Additionally, the system should support multi-user or collaborative work, allowing several users to interact with a virtual environment at the same time.
Possibilities to port the processing algorithms to embedded processors or FPGAs will be investigated during the project as well.
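One standard way to realize such re-projection algorithms for a planar dot pattern is plane-based pose estimation: because the laser dots form a known planar configuration, the homography between pattern coordinates and the observed dot positions can be decomposed into a rotation and a translation, assuming calibrated camera intrinsics K. The following is a generic textbook sketch of that decomposition, not the project's actual algorithm; the intrinsics and pose in the example are hypothetical values.

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover rotation R and translation t of a planar pattern (z = 0 plane)
    from the homography H that maps pattern coordinates to pixels, given
    camera intrinsics K.  H is proportional to K [r1 r2 t].
    """
    A = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(A[:, 0])   # scale factor from first column
    r1 = lam * A[:, 0]
    r2 = lam * A[:, 1]
    r3 = np.cross(r1, r2)                 # complete the right-handed frame
    t = lam * A[:, 2]
    R = np.column_stack([r1, r2, r3])
    # project onto the nearest proper rotation matrix (handles noisy H)
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```

In the project setting, the pattern-plane coordinates would be the known geometry of the laser-dot pattern on the screen and the pixel coordinates the detected dot centers, yielding the emitter's full 6-DoF pose per frame.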