This project investigates input systems for interacting with 3D environments. Because our input systems to date have been largely two-dimensional or difficult to use, we have lacked a natural way to interact with simulated 3D environments. Using the Microsoft Kinect, we can both represent the body in virtual space and use natural gestures to control the environment.

Analyzing the data received from two Kinect systems allows the creation of a real-time digital model of the user's body. This model serves as an avatar that corresponds to the user's location in space, allowing them to interact with virtual objects.
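As one illustration of how the two sensor streams might be combined, the sketch below fuses per-joint position estimates by transforming one sensor's coordinates into the other's frame and taking a confidence-weighted average. The joint layout, the calibration transform, and the weighting scheme are assumptions made for this example, not the project's actual implementation.

```python
# Minimal sketch of fusing skeleton data from two Kinect sensors into one
# body model. Joint names, the calibration transform, and the confidence
# weighting are illustrative assumptions.
import numpy as np

# Hypothetical extrinsic calibration: maps sensor B's coordinates into
# sensor A's frame (rotation R and translation t would be found ahead of
# time, e.g. by imaging a shared calibration target).
R = np.eye(3)                      # placeholder rotation
t = np.array([1.5, 0.0, 0.0])      # placeholder translation (meters)

def to_sensor_a_frame(p_b):
    """Transform a point from sensor B's frame into sensor A's frame."""
    return R @ p_b + t

def fuse_joint(pos_a, conf_a, pos_b, conf_b):
    """Confidence-weighted average of one joint seen by both sensors."""
    pos_b = to_sensor_a_frame(pos_b)
    total = conf_a + conf_b
    if total == 0.0:               # neither sensor tracked this joint
        return None
    return (conf_a * pos_a + conf_b * pos_b) / total

def fuse_skeletons(skel_a, skel_b):
    """Fuse two skeletons given as {joint: (xyz array, confidence)}."""
    fused = {}
    for joint in skel_a.keys() & skel_b.keys():
        pos_a, conf_a = skel_a[joint]
        pos_b, conf_b = skel_b[joint]
        fused[joint] = fuse_joint(pos_a, conf_a, pos_b, conf_b)
    return fused
```

Weighting by per-joint tracking confidence lets one sensor compensate when the other loses sight of a limb, which is the main benefit of running two Kinects rather than one.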
As a supplement to physical interaction, a gesture-based user interface gives the user greater control within simulations (e.g., navigation and selection). By using the hands rather than other, more restrictive input devices, the experience becomes more immersive, and the user can focus on data analysis, training, or whatever other goals they may have.
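To make the gesture-based interface concrete, the sketch below shows one way a single command, a horizontal hand swipe for navigation, might be recognized from the fused hand positions. The window size, distance thresholds, and the swipe-to-navigation mapping are illustrative assumptions rather than the project's actual gesture set.

```python
# Minimal sketch of one gesture: recognizing a horizontal hand swipe from
# a short history of fused hand positions. Thresholds are assumptions.
from collections import deque
import numpy as np

class SwipeDetector:
    """Flags a swipe when the hand travels far enough along x while
    staying roughly level in y and z."""

    def __init__(self, window=15, min_travel=0.4, max_drift=0.15):
        self.history = deque(maxlen=window)   # recent hand positions
        self.min_travel = min_travel          # required x travel (meters)
        self.max_drift = max_drift            # allowed y/z wobble (meters)

    def update(self, hand_pos):
        """Feed one hand position per frame; returns a gesture or None."""
        self.history.append(np.asarray(hand_pos))
        if len(self.history) < self.history.maxlen:
            return None                       # not enough frames yet
        pts = np.stack(self.history)
        travel = pts[-1] - pts[0]             # net motion over the window
        wobble = np.ptp(pts[:, 1:], axis=0).max()  # spread in y and z
        if abs(travel[0]) >= self.min_travel and wobble <= self.max_drift:
            self.history.clear()              # avoid double triggers
            return "swipe_right" if travel[0] > 0 else "swipe_left"
        return None
```

A detector of this kind would run once per frame on the fused hand joint, with recognized swipes mapped to navigation commands and an analogous detector (e.g., a push or grab) handling selection.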