Graphics and Vision

Details in mesh animations are difficult to generate, yet they have a
great impact on visual quality. In this work, we demonstrate a practical
software system for capturing such details from multi-view
video recordings. Given a stream of synchronized video images
that record a human performance from multiple viewpoints and an
articulated template of the performer, our system captures the motion
of both the skeleton and the shape. The output mesh animation
is enhanced with the details observed in the image silhouettes. For
example, a performance in casual loose-fitting clothes will generate
mesh animations with flowing garment motions. We accomplish
this with a fast pose tracking method followed by nonrigid deformation
of the template to fit the silhouettes.
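The silhouette-fitting step can be pictured as minimizing an energy that trades image fidelity against surface smoothness. The notation below is illustrative only, not taken from the paper:

```latex
% V: deformed template vertices; \mathcal{C}: vertex/silhouette correspondences
\[
  E(V) \;=\; \sum_{i \in \mathcal{C}}
    \bigl\| \Pi(\mathbf{v}_i) - \mathbf{s}_i \bigr\|^2
  \;+\; \lambda \sum_{\mathbf{v} \in V}
    \bigl\| L(\mathbf{v}) - L(\tilde{\mathbf{v}}) \bigr\|^2,
\]
```

where \(\Pi\) projects a vertex into a camera view, \(\mathbf{s}_i\) is its matched silhouette point, \(\tilde{\mathbf{v}}\) is the corresponding rest-pose vertex, and \(L\) is a Laplacian-type smoothness operator that keeps the deformation of the template gentle.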

We accurately capture the shape and appearance of a person's hairstyle. We use triangulation and a sweep with planes of light for the geometry. Multiple projectors and cameras address the challenges raised by the reflectance and intricate geometry of hair. We introduce the use of structure tensors to infer the hidden geometry between the hair surface and the scalp. Our triangulation approach affords a substantial accuracy improvement, and we are able to measure elaborate hair geometry, including complex curls and concavities. To reproduce the hair appearance, we capture a six-dimensional reflectance field. We introduce a new reflectance interpolation technique that leverages an analytical reflectance model to alleviate cross-fading artifacts caused by linear methods.
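To illustrate the structure-tensor idea, here is a minimal 2-D sketch (the actual system infers 3-D orientation in the captured hair volume; everything here, including parameter values, is an assumption for illustration): smoothing the outer product of image gradients yields a per-pixel tensor whose eigenvectors encode local orientation.

```python
import numpy as np

def _smooth(a, r=2):
    """Box blur (a simple stand-in for Gaussian smoothing)."""
    p = np.pad(a, r, mode='edge')
    out = np.zeros(a.shape, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy:r + dy + a.shape[0], r + dx:r + dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

def structure_tensor(image, r=2):
    """Per-pixel 2x2 structure tensor (jxx, jxy, jyy) of a grayscale image."""
    gy, gx = np.gradient(image.astype(float))
    # Smooth the outer products of the gradient over a neighborhood.
    return _smooth(gx * gx, r), _smooth(gx * gy, r), _smooth(gy * gy, r)

def gradient_orientation(jxx, jxy, jyy):
    """Angle of the dominant eigenvector.  Note it points across, not
    along, oriented features such as strands; the strand direction is
    the orthogonal (smaller-eigenvalue) eigenvector."""
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
```

For vertical stripes (intensity varying along x), the dominant eigenvector aligns with the x axis, so the returned angle is near zero.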

Commercial motion-capture systems produce excellent in-studio
reconstructions, but offer no comparable solution for acquisition
in everyday environments. We present a system for acquiring motions
almost anywhere. This wearable system gathers ultrasonic
time-of-flight and inertial measurements with a set of inexpensive
miniature sensors worn on the garment. After recording, the measurements
are combined using an extended Kalman filter to reconstruct the
joint configurations of the body. Experimental results show that even
motions that are traditionally difficult to acquire are recorded with
ease within their natural settings.
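The predict/correct structure of the filter can be sketched in one dimension. This is a linear Kalman filter over a toy [position, velocity] state, with inertial readings driving the prediction and ultrasonic ranges driving the correction; the real system is an extended (nonlinear) filter over full-body joint configurations, and all constants here are assumptions:

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
B = np.array([0.5 * dt**2, dt])         # how acceleration enters the state
H = np.array([[1.0, 0.0]])              # ultrasound measures position (range)
Q = 1e-4 * np.eye(2)                    # process-noise covariance
R = np.array([[1e-2]])                  # measurement-noise covariance

def predict(x, P, accel):
    """Propagate the state with one inertial (accelerometer) reading."""
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def correct(x, P, z):
    """Fold in one ultrasonic range measurement z."""
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Running predict/correct in lockstep on simulated constant-acceleration motion converges to the true position and velocity despite noisy range readings.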

Animating an articulated 3D character currently requires manual
rigging to specify its internal skeletal structure and to define how
the input motion deforms its surface. We present a method for animating
characters automatically. Given a static character mesh and
a generic skeleton, our method adapts the skeleton to the character
and attaches it to the surface, allowing skeletal motion data to animate
the character. Because a single skeleton can be used with a
wide range of characters, our method, in conjunction with a library
of motions for a few skeletons, enables a user-friendly animation
system for novices and children. Our prototype implementation,
called Pinocchio, typically takes under a minute to rig a character
on a modern midrange PC.
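Once the skeleton is embedded and attached, skeletal motion typically drives the surface through standard linear blend skinning: each vertex is transformed by every bone and the results are blended by per-vertex weights. A minimal sketch (this illustrates the standard technique, not Pinocchio's internals):

```python
import numpy as np

def linear_blend_skinning(rest_verts, weights, bone_transforms):
    """Deform a mesh by its skeleton with linear blend skinning.

    rest_verts:      (V, 3) vertex positions in the rest pose
    weights:         (V, B) per-vertex bone weights; each row sums to 1
    bone_transforms: (B, 4, 4) transform of each bone relative to rest
    """
    V = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((V, 1))])         # (V, 4)
    # Position of every vertex under every bone's transform: (B, V, 4)
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)
    # Blend the candidates by the skinning weights: (V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]
```

A vertex weighted half-and-half between a fixed bone and a bone translated by one unit lands halfway between the two candidate positions.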

We present an automated approach for high-quality preview of feature-film rendering during lighting design. Similar to previous work, we use a deep-framebuffer shaded on the GPU to achieve interactive performance. Our first contribution is to generate the deep-framebuffer and corresponding shaders automatically through data-flow analysis and compilation of the original scene. Cache compression reduces automatically generated deep-framebuffers to a reasonable size for complex production scenes and shaders. We also propose a new structure, the indirect framebuffer, that decouples shading samples from final pixels and allows a deep-framebuffer to handle antialiasing, motion blur, and transparency efficiently. Progressive refinement enables fast feedback at coarser resolutions.

We present a new data structure, the bilateral grid, that enables fast edge-aware image processing. By working in the bilateral grid, algorithms such as bilateral filtering, edge-aware painting, and local histogram equalization become simple manipulations that are both local and independent. We parallelize our algorithms on modern GPUs to achieve real-time frame rates on high-definition video. We demonstrate our method on a variety of applications such as image editing, transfer of photographic look, and contrast enhancement of medical images.
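The grid's splat/blur/slice pipeline can be sketched for grayscale bilateral filtering in pure NumPy. This is a deliberately simplified version: box blurs stand in for Gaussians, slicing is nearest-neighbor rather than trilinear, and all parameter values are assumptions:

```python
import numpy as np

def _blur_axis(a, axis):
    # [1, 2, 1] / 4 smoothing along one axis, clamping at the borders.
    p = np.pad(a, [(1, 1) if ax == axis else (0, 0) for ax in range(a.ndim)],
               mode='edge')
    n = a.shape[axis]
    lo = p.take(range(0, n), axis=axis)
    mid = p.take(range(1, n + 1), axis=axis)
    hi = p.take(range(2, n + 2), axis=axis)
    return 0.25 * lo + 0.5 * mid + 0.25 * hi

def bilateral_grid_filter(img, sigma_s=8, sigma_r=0.1):
    """Edge-aware smoothing of a grayscale image with values in [0, 1]."""
    h, w = img.shape
    # --- Splat: accumulate (value, weight) into a coarse 3-D grid ---
    gh, gw = h // sigma_s + 2, w // sigma_s + 2
    gr = int(1.0 / sigma_r) + 2
    data = np.zeros((gh, gw, gr))
    weight = np.zeros((gh, gw, gr))
    yi = (np.arange(h) / sigma_s).astype(int)
    xi = (np.arange(w) / sigma_s).astype(int)
    ri = (img / sigma_r).astype(int)       # range (intensity) coordinate
    np.add.at(data, (yi[:, None], xi[None, :], ri), img)
    np.add.at(weight, (yi[:, None], xi[None, :], ri), 1.0)
    # --- Blur: a uniform, separable smoothing of the whole grid ---
    for axis in range(3):
        data = _blur_axis(data, axis)
        weight = _blur_axis(weight, axis)
    # --- Slice: read the filtered value back out at each pixel ---
    num = data[yi[:, None], xi[None, :], ri]
    den = np.maximum(weight[yi[:, None], xi[None, :], ri], 1e-8)
    return num / den
```

Because smoothing happens in the lifted (space + range) domain, a noisy step edge is cleaned up on both sides without being blurred across: pixels on opposite sides of the step live in distant range cells.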

Enveloping, or the mapping of skeletal controls to the deformations
of a surface, is key to driving realistic animated characters. Despite
its widespread use, enveloping still relies on slow or inaccurate deformation methods. We propose a method that is fast, accurate,
and example-based. Our technique introduces a rotational regression
model that captures common skinning deformations such as
muscle bulging and twisting, as well as challenging areas such as the shoulders.
Our improved treatment of rotational quantities is made practical
by model reduction that ensures real-time solution of least-squares
problems, independent of the mesh size.

Three-dimensional shape can be drawn using a variety of feature lines, but none of the current definitions alone seems to capture all visually relevant lines. We introduce a new definition of feature lines based on two perceptual observations. First, human perception is sensitive to the variation of shading, and since shape perception is little affected by lighting and reflectance modifications, we should focus on normal variation. Second, view-dependent lines better convey smooth surfaces. From this we define view-dependent curvature as the variation of the surface normal with respect to a viewing screen plane, and apparent ridges as the loci of points that maximize a view-dependent curvature.
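These definitions can be written down compactly. The notation below is a sketch assumed for illustration, not the paper's exact formulation:

```latex
% \pi : S \to P projects the surface S onto the viewing screen plane P;
% \mathbf{n} is the surface normal.
\[
  Q(p) \;=\; d\mathbf{n}_p \circ (d\pi_p)^{-1},
  \qquad
  q_1(p) \;=\; \max_{\|v\|=1} \bigl\| Q(p)\, v \bigr\|,
\]
\[
  \text{apparent ridge at } p :\quad
  D_{t_1} q_1 = 0
  \quad\text{and}\quad
  D_{t_1} D_{t_1} q_1 < 0,
\]
```

where \(Q\) is the view-dependent curvature operator (normal variation measured per unit of screen-space displacement), \(q_1\) its largest magnitude, and \(t_1\) the direction achieving that maximum; apparent ridges are the points where \(q_1\) is locally maximal along \(t_1\).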

While measured Bidirectional Texture Functions (BTFs) enable impressive realism in material appearance, they offer little control, which limits their use for content creation. In this work, we interactively manipulate BTFs and create new BTFs from flat textures. We present an out-of-core approach to manage the size of BTFs and introduce new editing operations that modify the appearance of a material. These tools achieve their full potential when selectively applied to subsets of the BTF through the use of new selection operators. We further analyze the use of our editing operators for the modification of important visual characteristics such as highlights, roughness, and fuzziness.

Standing is a fundamental skill mastered by humans and animals alike. Although easy for adults, it requires careful and deliberate manipulation of contact forces. The variation in contact configuration (e.g., standing on one foot, on uneven ground, or while holding on for support) presents a difficult challenge for interactive simulation of humans and animals, especially while performing tasks in the presence of external disturbances. We describe an analytic approach for control of standing in three-dimensional simulations based upon local optimization. At any
point in time, the control system solves a quadratic program to compute actuation by maximizing the performance of multiple motion objectives subject to constraints imposed by actuation limits and contact configuration.
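A stripped-down version of one such control step can be posed as a box-constrained least-squares problem: each motion objective asks the actuation to satisfy a linear relation, and the solver trades the weighted objectives off subject to torque limits. This sketch uses SciPy's `lsq_linear`; the function names, weights, and the omission of contact-configuration constraints are all simplifications assumed here:

```python
import numpy as np
from scipy.optimize import lsq_linear

def solve_control_step(objectives, weights, tau_limit):
    """One quadratic program per time step.

    objectives: list of (A_i, b_i) pairs; objective i wants A_i @ tau ~= b_i
    weights:    relative importance of each objective
    tau_limit:  symmetric per-joint actuation (torque) limit
    Returns the torque vector tau minimizing the weighted squared error
    subject to |tau| <= tau_limit componentwise.
    """
    A = np.vstack([np.sqrt(w) * Ai for (Ai, _), w in zip(objectives, weights)])
    b = np.concatenate([np.sqrt(w) * bi for (_, bi), w in zip(objectives, weights)])
    res = lsq_linear(A, b, bounds=(-tau_limit, tau_limit))
    return res.x
```

With a strong balance-like objective pulling one coordinate past the limit and a mild effort penalty, the returned torque saturates at the bound on that coordinate while the other stays interior, which is exactly the behavior the actuation constraints are meant to enforce.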