Man of a thousand faces

9/8/2010, by Rachel Shafer

THANKING THE ACADEMY: Debevec (far right) accepts the techie equivalent of an Academy Award for computer graphics work he and his collaborators performed in Avatar. (Photo by Academy of Motion Picture Arts and Sciences.)

Over the last decade, the line between real and virtual in motion pictures has grown even blurrier with the rise of computer-generated imagery (CGI). If CGI is done well, you could be looking at a pixelated Brad Pitt, not the hunky star himself, and you’d be none the wiser.

Any visual effects supervisor will tell you that one of CGI’s biggest challenges is replicating faces. Humans look at faces every day and expertly distinguish fact from fiction. But technology is catching up, thanks, in part, to a Berkeley engineer.

Paul Debevec (Ph.D.’96 EECS) is a friendly, congenial academic with a love of movies who has engineered an ingenious system to make digital animation, in particular human faces, more realistic. His specialty? Light.

As director of the Graphics Lab at the University of Southern California (USC)’s Institute for Creative Technologies, Debevec helped develop a scanning device called Light Stage and its accompanying modeling software. “We needed a way to not only accurately map facial geometry down to the smallest scale but also capture how the face reflects light,” explains Debevec, a research associate professor of computer science.

In February, Debevec and several colleagues won a Scientific and Engineering Award. Given by the Academy of Motion Picture Arts and Sciences, this honor—the techie version of an Oscar—recognized the group’s latest Light Stage work in the groundbreaking blockbuster Avatar, the highest-grossing film of all time in the United States and Canada.

SPHERE OF LIGHT: Actor Alfred Molina in an early version of the Light Stage. Molina’s CGI twin played the multi-tentacled villain, Doc Ock, in the 2004 film Spider-Man 2. The digital character fooled audiences as it battled Spider-Man in midair, then sank into the Hudson River in a dramatic full-screen facial close-up. (Photo by USC ICT)

In Avatar, the main human characters have CGI avatars in the form of blue-skinned creatures called the Na’vi. Director James Cameron wanted the digital avatars to closely reflect the detailed shape of the real actors, and faces were particularly important. So, working with Cameron’s CGI team at Weta Digital, Debevec and his researchers invited the leading actors (Sigourney Weaver, Sam Worthington, Zoe Saldana, Joel Moore and Stephen Lang) to their lab.

Each actor spent a couple of hours in the Light Stage, acting out various facial expressions needed for their avatar. The Light Stage is a spherical structure based on an icosahedron, two meters in diameter, that almost entirely surrounds the subject. Attached to the sphere are 468 white Luxeon V LED lights. Over the course of three seconds, an eight-bit microprocessor flashes the lights in arbitrary pattern sequences and simultaneously triggers the shutters of two cinema-grade digital cameras. The cameras can record up to 4,800 frames per second at a resolution of 800 by 600 pixels.

The resulting scans display the actor’s face in every possible lighting condition and in such detail that skin pores and wrinkles can be seen at the sub-millimeter level. Light is captured as it reflects and refracts below the skin’s surface; direct and indirect light are separated out in a breakthrough technique that, during subsequent steps in the process, gives digital artists much greater control as they animate.
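The direct/indirect separation described above can be sketched in a few lines. The code below is an illustrative reconstruction, not the lab’s actual pipeline; it follows the published fast-separation idea of photographing the subject under shifted high-frequency lighting patterns, each turning on about half the light sources, so that the minimum a pixel ever records is purely indirect light. The function name and array layout are assumptions for the sketch.

```python
import numpy as np

def separate_direct_indirect(pattern_images):
    """Separate direct and indirect (global) light from images captured
    under shifted high-frequency illumination patterns, each lighting
    roughly half the sources at a time (illustrative sketch).

    pattern_images: sequence of (H, W) arrays, one per pattern."""
    stack = np.asarray(pattern_images, dtype=float)
    l_max = stack.max(axis=0)  # each pixel is directly lit in some pattern
    l_min = stack.min(axis=0)  # only indirect light reaches the pixel here
    direct = l_max - l_min
    indirect = 2.0 * l_min     # patterns light ~half the sources at once
    return direct, indirect
```

With two complementary half-on patterns, a pixel’s brightest observation contains its full direct light plus half the indirect light, and its darkest contains only half the indirect light, so the two components fall out by subtraction.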

Debevec’s team then plugs the data into specialized algorithms that churn out highly complex facial models. Weta Digital used the models to help faithfully reproduce the detailed skin structure and texture of the Avatar actors. The Light Stage system also helped Weta artists light the CGI faces in postproduction to accurately match the lighting of real scenes that had already been filmed.
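The relighting step exploits the fact that light adds linearly: an image of the face under any new illumination can be approximated as a weighted sum of the basis images captured one light at a time in the stage. The sketch below is a minimal illustration of that principle, assuming a hypothetical `basis_images` array of per-LED photographs; it is not the studio’s actual software.

```python
import numpy as np

def relight(basis_images, weights):
    """Synthesize a new lighting condition as a weighted sum of
    one-light-at-a-time basis images (illustrative sketch).

    basis_images: (n_lights, H, W, 3) array, one photo per LED.
    weights: (n_lights,) intensities sampled from the target environment."""
    basis = np.asarray(basis_images, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Contract the lights axis: result[h, w, c] = sum_i w[i] * basis[i, h, w, c]
    return np.tensordot(w, basis, axes=1)
```

In practice the weights would come from measuring the light arriving from each LED’s direction in the real scene being matched, which is what lets a digital face drop convincingly into already-filmed footage.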

NOT SO FAR, FAR AWAY: Debevec’s team is working on new projects like this 3-D imaging technology. The Star Wars TIE fighter references the 1977 movie that first dramatized such an idea. Of course, that was a special effect. Debevec’s project is real. (Photo by USC ICT)

“These are very famous faces, and you don’t want the digital version to look stilted and wooden or it will immediately throw a viewer out of the story,” Debevec explains. “As with real actors, the way you light digital characters is extremely important.”

The team used a similar scanning process in Superman Returns, Spider-Man 2 and 3, Hancock, King Kong and The Curious Case of Benjamin Button.

Debevec has pioneered a number of techniques in computer graphics and visual effects, among them high dynamic range imaging (HDRI) and radiance maps. HDRI is a method of digitally capturing and editing the full range of radiant light present in a real-world scene, a range too wide for either film or digital cameras to record in a single shot, so it is achieved by combining multiple exposures.
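The multiple-exposure idea can be sketched compactly. The code below is a much-simplified illustration of HDR assembly: it assumes a linear camera response and known exposure times, whereas the full published method also recovers the camera’s response curve from the photos themselves. Names and the weighting scheme are choices made for the sketch.

```python
import numpy as np

def merge_hdr(exposures, times):
    """Merge differently exposed, linear images into one radiance map
    (simplified sketch; assumes linear response, known exposure times).

    exposures: list of (H, W) arrays with pixel values in [0, 1].
    times: matching exposure times in seconds."""
    num = np.zeros_like(np.asarray(exposures[0], dtype=float))
    den = np.zeros_like(num)
    for img, t in zip(exposures, times):
        z = np.asarray(img, dtype=float)
        w = 1.0 - np.abs(2.0 * z - 1.0)  # trust mid-tones over clipped pixels
        num += w * (z / t)               # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)
```

Each exposure gives an estimate of scene radiance (pixel value divided by exposure time); the hat-shaped weight downplays pixels near black or white, where one shot has clipped and another still holds usable information.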

A consummate researcher, Debevec publishes six or so papers a year with his team; at any given time, they are working on five different research projects in addition to several industry collaborations. At the moment, one of their biggest projects is the next iteration of Light Stage, Light Stage 6, a 26-foot sphere with more than 6,000 LED lights that will scan a full body in motion.

The lab is also delving into next-generation virtual reality, creating prototypes for 3-D imaging, video teleconferencing and, most recently, lighting for interactive virtual people. One application, Debevec says, might be a virtual mentor. Another is a virtual museum guide (now being tested at the Museum of Science in Boston, with funding from the National Science Foundation) capable of responding to questions from visitors. Not as heart-stopping as Brad Pitt, but another way Paul Debevec is illuminating our future world.

Read the full story in Forefront’s fall 2010 issue, due to hit mailboxes on or about Nov. 8, 2010.