How Effects Wizards Transformed G-Force From 2D to 3D

By the time filmmakers finished the live action/animated hybrid film G-Force, they had rendered 271,955,886,586 photorealistic hairs--a number that nears the number of stars in the Milky Way. The film, which centers around guinea-pig secret agents tasked with saving the world from a mechanized menace, contains a giant robot with 78,000 geometric pieces and 16,400,000 polygons--more complex than even Michael Bay's Transformers. But most remarkably, the filmmakers used new technology to create 3D that's different from anything currently in theaters--which is impressive, considering that G-Force didn't start out as a 3D movie.

The current stereoscopic craze is focused mainly in the animated arena. That's because 3D, which requires two camera views--one representing the left eye view, one representing the right--to synthesize the illusion of depth, can be accomplished more easily in digitally animated worlds simply by generating another camera view in the computer. Put on polarized glasses, which route the appropriate view to each eye, and you have a 3D movie. Shooting live-action 3D with film-camera rigs is much harder--and even more difficult when you add in animated characters, which need to be placed correctly in space in order to look like they were present during filming.


"You're committing to many of the stereo properties at the time of shooting, and it locks you into how the stereo will play on screen," says director Hoyt H. Yeatman Jr., who did run tests with James Cameron's Pace Reality 3D Camera System (developed for Avatar) to see if shooting live-action 3D was possible for G-Force. "You're deciding the interocular, which is the distance between the two cameras, and the convergence point, where the cameras are toeing in. I really wanted to be able to have that control in post-production. Much like doing editing, sound or color, we don't make any of those decisions when we're shooting our actors--why should you be doing 3D stereo then?" He decided against shooting the film in 3D.
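The two parameters Yeatman names can be put into a simple relation: for a parallel stereo rig whose images are shifted to converge at a chosen distance, a point's on-screen parallax depends on the interocular, the convergence distance, and the lens focal length. The sketch below is illustrative only; the numbers are invented, not the film's actual settings.

```python
def screen_parallax(depth, interocular, convergence, focal_length):
    """Horizontal parallax (in sensor units) for a point at `depth`,
    given a parallel stereo rig shifted to converge at `convergence`.

    Zero parallax puts the point on the screen plane; negative
    parallax pulls it in front of the screen, positive pushes it
    behind. All distances are in meters.
    """
    return focal_length * interocular * (1.0 / convergence - 1.0 / depth)

# Example values (hypothetical): 65 mm interocular, 50 mm lens,
# converged 5 m from the rig.
on_screen = screen_parallax(5.0, 0.065, 5.0, 0.05)    # 0.0: at convergence
in_front  = screen_parallax(2.0, 0.065, 5.0, 0.05)    # negative parallax
behind    = screen_parallax(20.0, 0.065, 5.0, 0.05)   # positive parallax
```

This is exactly the control Yeatman wanted to defer: change `interocular` or `convergence` after the fact and every point's apparent depth moves with it.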


So principal photography on G-Force, produced by Jerry Bruckheimer, proceeded as it does on most movie sets--in two dimensions. But four months into shooting the film, Disney asked if Yeatman could make G-Force in 3D. Yeatman turned to VFX house Sony Pictures Imageworks, which was already creating the film's animated guinea pigs, to figure out how to do it. "We said, 'There is this technique we can use,'" says Rob Engle, 3D visual-effects supervisor at Imageworks. Known as dimensionalizing, it's a process in which artists take a 2D image and pick the layers where they want to create depth--and it's all done by hand. The technique gave Yeatman the control he wanted over the stereo settings in postproduction. "It's the first time a feature-length movie has been dimensionalized in this way," he says.

First, artists scanned the 2D plate photography into the computer, then rotoscoped--or traced--all the elements of the scene. "We're effectively defining the edges of all the objects that are in the photography," Engle explains. Next comes match-moving, where artists create a virtual representation of the set during photography. "Imagine I've photographed a coffee mug on the table," Engle says. "We'd put into the computer a digital coffee mug, a digital table, and a digital representation of where the camera is." This allows animators to place the CG guinea pigs in the virtual scene and, once satisfied with the movement, place that element on top of the original plate. "Everything will feel like it's been photographed at the same time," Engle says.
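The match-move idea can be illustrated with a toy pinhole projection: once the virtual camera's position and focal length match the real one's, a digital stand-in projects to the same image coordinates as the photographed object it represents. The camera and mug values below are invented for the example.

```python
def project(point, focal_length):
    """Project a 3D point (x, y, z) through an idealized pinhole
    camera at the origin looking down +z, returning image-plane
    coordinates (u, v). Distances are in meters."""
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

# A digital coffee mug placed 3 m in front of the matched camera.
# If the match-move is right, (u, v) lands on the photographed mug.
mug = (0.4, -0.2, 3.0)
u, v = project(mug, focal_length=0.05)
```

Animated characters dropped into this virtual set then land in the right place in the frame, which is why, as Engle says, everything feels photographed at the same time.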

Next, animators projected the plate, without CG characters, onto the match-move geometry. The original view represented the left eye, and filmmakers took a picture of the plate from the right-eye perspective to synthesize the second view. "Now, I have a picture that represents what we would have seen had there really been a [second] camera on-set in the right-eye position," Engle says.
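The step Engle describes can be caricatured in a few lines: reproject each left-eye pixel into an offset right-eye camera, using the depth supplied by the match-move geometry. The real pipeline re-renders the projected plate from the second camera; this one-dimensional toy (with made-up units) just shifts pixels by a depth-dependent disparity and keeps the nearest surface wherever two pixels collide.

```python
HOLE = None  # marks right-eye pixels no left-eye pixel maps to

def synthesize_right_view(left_row, depths, interocular, focal_length):
    """Build one row of a synthetic right-eye image from a left-eye
    row and its per-pixel depths. Nearer pixels shift farther; the
    z-buffer keeps the nearest surface when shifts collide."""
    width = len(left_row)
    right = [HOLE] * width
    z_buffer = [float("inf")] * width
    for x, (value, z) in enumerate(zip(left_row, depths)):
        disparity = int(round(focal_length * interocular / z))
        xr = x - disparity
        if 0 <= xr < width and z < z_buffer[xr]:
            right[xr] = value
            z_buffer[xr] = z
    return right

row    = ["A", "B", "C", "D", "E", "F"]
depths = [ 10,  10,   2,   2,  10,  10]   # C and D are a near object
right  = synthesize_right_view(row, depths, interocular=2.0, focal_length=2.0)
# The near object shifts left in the right-eye view, leaving HOLE
# entries where background was never photographed by the left eye.
```

Those empty entries are precisely the missing information Engle describes next.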

But because the left and right eyes are separated by about two and a half inches, they each see a slightly different view, creating something called object occlusion. "The left-eye photograph doesn't have all the information we need to see from the right eye," Engle explains. "We're missing information that was never in the left-eye photograph. So we have to fill in that hole, and there are a series of techniques including optical tracking and painting we use to fill in the holes." The final step to create the 3D shot is to integrate the CG guinea pigs (which have been rendered in 3D) into the synthesized 3D plate.
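Imageworks filled those gaps with tracking and hand painting; as a crude baseline for comparison, depth-image-based rendering pipelines often just extend the neighboring background into each disoccluded gap. The sketch below (with made-up data) shows that simplest version.

```python
HOLE = None  # pixels the left-eye photograph never captured

def fill_holes(row):
    """Fill HOLE entries by extending the nearest valid pixel from
    the right. When synthesizing a right-eye view, disocclusions
    open up on the background side of near objects, so the revealed
    surface is usually the one just right of the gap."""
    filled = list(row)
    next_value = HOLE
    for x in range(len(filled) - 1, -1, -1):
        if filled[x] is HOLE:
            filled[x] = next_value
        else:
            next_value = filled[x]
    return filled

gappy = ["C", "D", None, None, "E", "F"]   # holes behind a near object
patched = fill_holes(gappy)
```

A smeared background like this is exactly why hand painting was worth the cost on hero shots: automatic extension is plausible for flat walls but visibly wrong on detailed sets.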

With almost 2000 cuts--a Pixar film typically has 1100 cuts--fast-paced action sequences and near-constant back-and-forth between virtual plates and actual photography, stereoscopic precision in G-Force was key. If the stereo settings were too aggressive, the film could become painful to watch. So as the artists worked, they checked their progress on stereoscopic viewing stations and in Imageworks' 3D cinema screening rooms. "We do frequent checks on our material," Engle says, "to make sure everything is working just fine."

"[Creating 3D in postproduction] gave us tremendous flexibility," Yeatman says. "You can move things out, you can control it, you can flatten it. It allows you to sculpt the 3D, which is much better than the way we did it with initial [3D] photography."


In G-Force, the black letterbox bars are actually projected as part of the film. "To the movie-going audience it's like a normal mask you find in the theater," Yeatman says. "But on occasion, we have our characters appear to break over top of that frame. And what that does, psychologically, is that it looks immediately to the observer that the object has entered into the theater."

All in all, creating G-Force was a huge endeavor. Storing the film required 515 terabytes of space; the longest render time for one frame was 125 hours. "Basically, turning [G-Force] into a stereoscopic movie made every shot a VFX shot," says Scott Stokdyk, visual-effects supervisor at Sony Imageworks. "We ended up having over 1800 shots in the movie between the 2D team and the 3D team. To give you perspective, Spider-Man 3 had 900 shots. And not only that--you have to do the normal visual effects first, then the shot, and then the additional left eye and right eye. So we're actually delivering three times 1800 shots. It was huge."

Still, the filmmakers believe the effort paid off. "We have a 2D version, too," Yeatman says, "and having worked side-by-side on both, the 3D is definitely an amazing advancement over 2D. You just see so much more."

"The 3D was so effortless," agrees Nicolas Cage, who voices Speckles the mole. "I know what kind of work went into that to make it appear effortless--you don't see the line or the edge like we did when we were kids."