Daniel Vlasic

Transferring existing mesh deformation from one character to another
is a simple way to accelerate the laborious process of mesh
animation. In many cases, it is useful to preserve the semantic characteristics
of the motion instead of its literal deformation. For example,
when applying the walking motion of a human to a flamingo,
the knees should bend in the opposite direction. Semantic deformation
transfer accomplishes this task with a shape space that enables
interpolation and projection with standard linear algebra. Given
several example mesh pairs, semantic deformation transfer infers
a correspondence between the shape spaces of the two characters.
This enables automatic transfer of new poses and animations.
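The transfer step itself comes down to standard linear algebra. A minimal sketch, assuming poses are already encoded as vectors in a suitable shape space (the encoding is the hard part and is omitted here; all names below are hypothetical):

import numpy as np

def transfer_pose(source_examples, target_examples, new_source_pose):
    # source_examples: (k, d_s) array, one shape-space vector per example pose
    # target_examples: (k, d_t) array, the corresponding target poses
    # new_source_pose: (d_s,) vector encoding the pose to transfer
    S = np.asarray(source_examples, dtype=float)
    T = np.asarray(target_examples, dtype=float)
    p = np.asarray(new_source_pose, dtype=float)

    # Projection: solve min_w ||S^T w - p|| by least squares, expressing
    # the new pose as a combination of the source examples.
    w, *_ = np.linalg.lstsq(S.T, p, rcond=None)

    # Reuse the same combination weights in the target's shape space.
    return T.T @ w

In this simplified view, every frame of an animation is transferred independently by the same projection.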

Details in mesh animations are difficult to generate, but they have a
great impact on visual quality. In this work, we demonstrate a practical
software system for capturing such details from multi-view
video recordings. Given a stream of synchronized video images
that record a human performance from multiple viewpoints and an
articulated template of the performer, our system captures the motion
of both the skeleton and the shape. The output mesh animation
is enhanced with the details observed in the image silhouettes. For
example, a performance in casual loose-fitting clothes will generate
mesh animations with flowing garment motions. We accomplish
this with a fast pose tracking method followed by nonrigid deformation
of the template to fit the silhouettes.
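A minimal sketch of the silhouette-fitting step, assuming Laplacian-style deformation with soft positional constraints on contour vertices (the actual system's correspondence search and deformation energy are more involved; all names are hypothetical):

import numpy as np

def fit_silhouette(vertices, laplacian, constrained_ids, targets, weight=1.0):
    # vertices:        (n, 3) template positions after pose tracking
    # laplacian:       (n, n) mesh Laplacian matrix
    # constrained_ids: (m,) indices of vertices matched to silhouette contours
    # targets:         (m, 3) positions those vertices should move toward
    V0 = np.asarray(vertices, dtype=float)
    L = np.asarray(laplacian, dtype=float)
    ids = np.asarray(constrained_ids)
    t = np.asarray(targets, dtype=float)

    delta = L @ V0  # differential coordinates that preserve surface detail

    # Soft constraints as extra least-squares rows: weight * V[ids] = weight * t.
    C = np.zeros((len(ids), len(V0)))
    C[np.arange(len(ids)), ids] = weight

    # Solve min_V ||L V - delta||^2 + weight^2 * ||V[ids] - t||^2.
    A = np.vstack([L, C])
    b = np.vstack([delta, weight * t])
    new_vertices, *_ = np.linalg.lstsq(A, b, rcond=None)
    return new_vertices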

Commercial motion-capture systems produce excellent in-studio
reconstructions, but offer no comparable solution for acquisition
in everyday environments. We present a system for acquiring motions
almost anywhere. This wearable system gathers ultrasonic
time-of-flight and inertial measurements with a set of inexpensive
miniature sensors worn on the garment. After recording, the information
is combined using an Extended Kalman Filter to reconstruct
the joint configurations of the body. Experimental results show that even
motions that are traditionally difficult to acquire are recorded with
ease within their natural settings.
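The fusion step follows the standard Extended Kalman Filter recipe. A generic sketch (the specific state and measurement models of the paper are not reproduced here; the callables are placeholders):

import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    # x, P : state estimate (e.g., joint configuration) and its covariance
    # u    : control input (e.g., inertial readings)
    # z    : measurement (e.g., ultrasonic time-of-flight ranges)
    # f, F : process model and its Jacobian
    # h, H : measurement model and its Jacobian
    # Q, R : process and measurement noise covariances

    # Predict: propagate the state through the process model.
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q

    # Update: correct the prediction with the range measurements.
    H_k = H(x_pred)
    innovation = z - h(x_pred)
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new

In systems of this kind, the inertial stream typically drives the high-rate prediction while the ultrasonic ranges bound the drift that pure inertial integration accumulates.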

Face Transfer is a method for mapping video-recorded performances
of one individual to facial animations of another. It extracts
visemes (speech-related mouth articulations), expressions,
and three-dimensional (3D) pose from monocular video or film
footage. These parameters are then used to generate and drive a
detailed 3D textured face mesh for a target identity, which can be
seamlessly rendered back into target footage. The underlying face
model automatically adjusts for how the target performs facial expressions
and visemes. The performance data can be easily edited
to change the visemes, expressions, pose, or even the identity of
the target—the attributes are separably controllable.
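The separable control is natural to a multilinear structure, where identity, expression, and viseme each occupy their own mode of a learned tensor. A sketch under an assumed mode layout (the model construction itself is not reproduced):

import numpy as np

def synthesize_face(core, identity_w, expression_w, viseme_w):
    # core: tensor of shape (3 * n_verts, n_id, n_expr, n_viseme),
    #       learned from a corpus of face scans (layout assumed here).
    # Each weight vector selects a point along one attribute axis, so
    # editing one vector leaves the other attributes untouched.
    geometry = np.einsum('vijk,i,j,k->v',
                         core, identity_w, expression_w, viseme_w)
    return geometry.reshape(-1, 3)  # (n_verts, 3) vertex positions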

We present new hardware-accelerated techniques for rendering surface
light fields with opacity hulls that allow for interactive visualization
of objects that have complex reflectance properties and elaborate
geometrical details. The opacity hull is a shape enclosing the
object with view-dependent opacity parameterized onto that shape.
We call the combination of opacity hulls and surface light fields the
opacity light field. Opacity light fields are ideally suited for rendering
the visually complex objects and scenes obtained with 3D
photography. We show how to implement opacity light fields in
the framework of three surface light field rendering methods:
view-dependent texture mapping, unstructured lumigraph rendering, and
light field mapping.
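At render time, each surface point's color and opacity can be blended from the captured views nearest to the novel viewpoint. A sketch of one plausible weighting in the unstructured-lumigraph spirit (illustrative, not the exact shader math of any of the three methods):

import numpy as np

def blend_views(view_dirs, rgba_samples, query_dir, k=3):
    # view_dirs:    (n, 3) unit directions of the captured camera views
    # rgba_samples: (n, 4) color + view-dependent opacity at one surface point
    # query_dir:    (3,) unit direction toward the novel viewpoint
    dirs = np.asarray(view_dirs, dtype=float)
    rgba = np.asarray(rgba_samples, dtype=float)
    cosines = dirs @ np.asarray(query_dir, dtype=float)
    nearest = np.argsort(-cosines)[:k]  # k views closest in angle

    # Shift weights so the k-th view gets zero, keeping the blend smooth
    # as the nearest-view set changes with the query direction.
    w = cosines[nearest] - cosines[nearest].min()
    total = w.sum()
    w = w / total if total > 0 else np.full(k, 1.0 / k)

    return w @ rgba[nearest]  # blended (r, g, b, opacity)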