The simulation and rendering of human activity must take into account a person's
interactions with the objects and people in the scene in order to create a realistic
virtual experience. Nowadays, most available simulation software tracks the
position of the head in order to infer the person's gaze and accurately determine
the part of the scene that the participant is looking at. These approaches focus
on simulating scenarios for a single person moving through a scene. The main use
of such simulations is training; however, very little research has been devoted
to training small groups of people. In that setting, each individual's actions have to
be characterized and understood by the system in order to integrate the feedback
from every participant. Human activity in the broad sense is very complex to analyze;
we therefore focus on human gestures in order to understand the person's activity
and provide feedback to the simulator. Gestures are commonly used to communicate,
point to specific locations, or attract other participants' attention.
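The head-pose-based gaze inference described above can be sketched as a simple geometric computation: a forward ray derived from the tracked head pose, intersected with a surface of interest in the scene. The pose convention (yaw/pitch angles) and all numbers below are hypothetical illustrations, not the method of any particular simulation package.

```python
import numpy as np

def gaze_ray(head_pos, yaw, pitch):
    """Approximate gaze direction from a tracked head pose.

    head_pos : (3,) head position in world coordinates (metres)
    yaw, pitch : head orientation in radians (hypothetical convention:
    yaw about the vertical axis, pitch about the lateral axis)
    """
    # Forward unit vector of the head after applying yaw, then pitch.
    direction = np.array([
        np.cos(pitch) * np.sin(yaw),
        np.sin(pitch),
        np.cos(pitch) * np.cos(yaw),
    ])
    return head_pos, direction

def intersect_plane(origin, direction, plane_point, plane_normal):
    """Point where the gaze ray hits a planar surface, or None."""
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-9:          # ray parallel to the plane
        return None
    t = np.dot(plane_point - origin, plane_normal) / denom
    return origin + t * direction if t > 0 else None

# Example: a head at eye height looking straight ahead (+z)
# at a screen 2 m away.
origin, d = gaze_ray(np.array([0.0, 1.7, 0.0]), yaw=0.0, pitch=0.0)
hit = intersect_plane(origin, d,
                      plane_point=np.array([0.0, 0.0, 2.0]),
                      plane_normal=np.array([0.0, 0.0, -1.0]))
# hit is the looked-at point on the screen: (0, 1.7, 2)
```

In practice the intersection target would be the geometry of the rendered scene rather than a single plane, but the principle is the same.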
These gestures have a large impact on simulations, since they provide natural
feedback from the person to the simulation engine.
We propose to automatically extract and interpret this motion and these gestures from a
set of cameras observing the person in the simulation room. The approach does not
require the person to wear any additional sensors and therefore yields a more realistic
simulation of real scenarios. Gesture-based communication will improve the training
of individuals and of small groups, since the feedback from each person is taken
into account when evaluating how well the user or the group performs
a specific task.
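One building block of sensor-free, multi-camera motion capture is recovering a 3D body landmark (e.g. a fingertip during a pointing gesture) from its detections in two or more calibrated views. The sketch below uses standard linear (DLT) triangulation; the camera matrices and pixel observations are toy values, and the source does not commit to this particular reconstruction method.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) pixel observations of the same landmark
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X; stack them and take the null space.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # right singular vector of smallest value
    return X[:3] / X[3]            # dehomogenize

# Two toy cameras: one at the origin, one shifted 1 m along x,
# both looking down +z with identity intrinsics.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, 0.1, 3.0])     # hypothetical landmark position
h = np.append(X_true, 1.0)
x1 = (P1 @ h)[:2] / (P1 @ h)[2]        # project into each camera
x2 = (P2 @ h)[:2] / (P2 @ h)[2]

X = triangulate(P1, P2, x1, x2)        # recovers X_true (noiseless case)
```

A full system would run a per-view body-landmark detector and triangulate each joint over time, feeding the resulting trajectories to the gesture classifier.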