Ask anyone about milestones in computer graphics, and the film Terminator 2: Judgment Day (T2) will be at the top of the list. The first Terminator film in 1984 started a franchise. T2, released in 1991, helped enable an industry.

“It set the stage for computer graphics in film,” says Pat Conran, CG supervisor at Industrial Light & Magic, the visual effects studio that pushed computer graphics into the future by creating a liquid-metal terminator for T2, and again created digital effects for the latest sequel in the franchise, Terminator Salvation.

“[Visual effects supervisor] Dennis Muren has said the shot where actor Robert Patrick becomes liquid as he walks through the bars in T2 was the moment when he realized we could now do anything,” Conran says. “So, working on the fourth Terminator film was quite exciting for me.”

The money shot in T2 was of the liquid-metal T-1000 striding out from a fiery explosion, and the key to integrating that character into the filmed footage was ILM’s newfound ability to render and composite a CG character with correct reflections. The environment in the background plate, the light from the fire and the background, and the shadows had to distort correctly as the shiny-surfaced character walked toward camera. At the time, it was painstaking work.

Now, 18 years later, such visual effects are so often taken for granted that it might come as a surprise to learn the breakthrough technology for the fourth film, Terminator Salvation, includes a new set of energy-conserving shaders designed to produce more efficient and accurate reflections, new technology for choreographing the light playing across a surface, and new techniques for creating liquid metal. The artists at ILM didn’t use these new tools to put reflections on shiny surfaces for standout effects as they did in T2, but to create realistic surfaces for CG terminators, cars, trucks, airplanes, and other objects that blend so seamlessly into gritty backgrounds you believe they exist in that world.

The film takes place in 2018. Skynet has launched a nuclear war, and in the aftermath, John Connor leads a group of survivors who try to keep Skynet’s machines from completely annihilating the human race. As he does so, Connor realizes he is facing an alternate reality, not the future foretold to him in the previous films. There’s no shiny, liquid-metal T-1000 in this world.

“McG wanted a harsh, contrasty feel,” says Ben Snow, visual effects supervisor at ILM. “The machines would keep themselves running, but they didn’t care if they looked crappy. They know they have an expiry date, so they do minimum maintenance. They get rusty and beaten up.”

The machines include the T-800 and T-600 humanoid terminators that we see largely in their mechanical endoskeleton forms, eel-like Hydrobots, aerial Hunter-Killers, road-bike Moto-Terminators, and giant harvesters for collecting people.

“Through the course of the film, we’re introduced to ways Skynet can kill people,” says Marc Chu, animation supervisor at ILM, “the sea, air, land—no place is safe.” Of the machines, Stan Winston’s studio built puppets for the T-600 that appear in some shots, a T-800 endoskeleton used primarily for reference, and animatronic Hydrobots that appear in many shots. Otherwise, the terminators are digital, created at ILM.

Moving Machines
All told, about 12 animators worked in Autodesk’s Maya on the shots, with four lead animators managing major sequences and types of creatures. Jakub Pistecky, for example, led the work on the T-800s and T-600s, the intricately mechanical humanoid robots that can cover themselves with flesh. The T-600 appears first, in its mechanical form. “The T-600 was only alluded to in the previous movies,” Chu says. “It’s a predecessor to the T-800, and Skynet hasn’t exactly perfected the technique. Humans can spot them and run away.”

For shots of these terminators interacting with the actors in the film, ILM dressed stunt actors in the studio’s gray iMocap suits, captured their performances during principal photography, and then applied that data to the CG models as a starting point and timing reference for the animators.

Top to bottom: A CG car simulated using the forces from a crash with a moving tow truck flies into the air, creating CG debris that hits the roadway below. Second, the same simulation seen through a different virtual camera. Third, the rendered car positioned into the footage. Fourth, the original footage shot near Albuquerque. The top image, at left, shows the final image rendered with neutral lighting to allow for extreme color timing in DI.

“Any time we had a humanoid CG creature, we wanted tight interaction,” Chu says. “If they could punch, push, and pull, lean weight on, or vice versa, it helped make the action feel more real, like they were interacting with someone.”

ILM also used stunt actors for the Moto-Terminator reference. These machines, which look like road bikes on steroids outfitted with guns, must seem to have intelligence, so ILM put stunt riders on Ducati motorcycles during principal photography to get cues for maneuvering. The Moto-Terminators went through three design cycles, two at ILM after initial concept designs by Martin Laing, before McG got the beefy look he wanted.

“They’re sort of like attack dogs,” Chu describes. “And they have their own consciousness, so we looked at the shots on set with the stunt riders to help get the sensation of weight shifting for the animated machines without a rider on top.”

The aerial Hunter-Killers have bigger guns than in previous films, and they’re no longer shiny, but they act in much the same way. “They stand still in space, hover, take off, and fly like a jet,” Chu says. “We’re introducing a lot of creatures people aren’t familiar with, so we tried to preserve some of what they have seen in previous films.”

To rig the machines, ILM used techniques developed originally for Transformers, which give animators flexibility in choosing which parts connect to what (see “Heavy Metal,” July 2007), but for this film, the riggers sometimes put limitations on that flexibility. “We’re working with heavy machinery and equipment,” Chu says. “We wanted the models to move like they would in reality and to be robotic, so sometimes we put boundaries on what the rig could do, and sometimes we just put limitations on ourselves. We needed to get the right pose from the camera angle but still keep an eye on realism.”

The most difficult machine and most intricate rig was that for the harvester, a 60-foot biped that looks something like a headless terminator, but with two main arms, two inner arms, and weapons that pop out. As it had done for the Moto-Terminators, ILM redesigned the harvester to create a beefier machine than in the original concept art. The Moto-Terminators, which function as a team to collect humans, slide out from plates in the harvester’s legs.

For the Hydrobots, ILM matched the Winston models in look and animation style, and intercut them with the practical puppet in several shots. The studio also intercut with Winston’s T-600 terminator, although the fighting shots were generally CG, and they matched the digital T-800’s mechanical endoskeleton to the Winston puppet for close-up shots. The new shaders helped make those transitions seamless.

The Importance of Shaders

“We had a lot of metal,” Conran says. “That’s what we were thinking about going into this film.” Before this movie, ILM’s most recent advancement in creating metal surfaces was the set of techniques used in Iron Man to create brushed metal (see “Power Suits,” May 2008).

The beefy looking Moto-Terminators rode through three design cycles before director McG got the attack-dog look he wanted. To give the animated bikes intelligent moves, animators referenced footage of stunt riders on location.

“We used [the brushed-metal techniques] where appropriate,” Conran says, “but for this film, we needed cast iron and grungy metal, so we did a major push under the hood with our shaders.” The new Pixar RenderMan shaders and image-based lighting helped the crew create the skin-covered terminators, as well.

Developed by ILM’s research and development department under the leadership of Christophe Hery, these new shaders use “importance sampling” for reflections and raytracing, and in doing so, combine specular highlights and reflection.

Explains Conran, “Traditionally, when you’re raytracing to sample the environment, you cast one ray in a cone; you blur the environment to mimic the surface properties. With our new technique, you sample the environment and the BRDF (bidirectional reflectance distribution function) of the surface. Then, you cast rays and gather light from all parts of the image using multiple rays weighted according to the specular.”

Conran continues: “Usually, specular highlights are CG cheats for doing reflections. We’re trying to get away from that. If you sample reflections using the specular function, then you marry specular and reflection, which are really the same thing. We get a more realistic look, and we get it quicker.”
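The idea Conran describes can be mocked up in a few lines of code. The sketch below is a hypothetical illustration, not ILM’s RenderMan shader: instead of blurring the environment under a single cone ray, it draws many rays from a simple Phong-style specular lobe and averages the environment radiance they gather, so samples concentrate where the BRDF says the surface reflects most strongly. The `environment_radiance` lookup is invented for the example.

```python
import math
import random

def sample_phong_lobe(shininess):
    """Draw a direction around the mirror direction, distributed
    according to a Phong specular lobe (pdf proportional to cos^n).
    Returns the polar offset angle alpha and the azimuth phi."""
    u1, u2 = random.random(), random.random()
    alpha = math.acos(u1 ** (1.0 / (shininess + 1)))  # inverse-CDF sampling
    phi = 2.0 * math.pi * u2
    return alpha, phi

def environment_radiance(alpha, phi):
    """Stand-in environment lookup (invented for this sketch): a bright
    region near the mirror direction, falling off with the offset angle."""
    return max(0.0, math.cos(alpha)) * (1.5 + 0.5 * math.sin(phi))

def reflect_importance_sampled(shininess, n_rays=256):
    """Monte Carlo estimate of reflected radiance. Because directions are
    drawn from the specular lobe itself, each sample's BRDF weight cancels
    against its pdf, so the estimate is a plain average of the samples."""
    total = 0.0
    for _ in range(n_rays):
        alpha, phi = sample_phong_lobe(shininess)
        total += environment_radiance(alpha, phi)
    return total / n_rays
```

The key point is the weighting Conran mentions: rays are not scattered uniformly and then multiplied by the specular function; they are generated from the specular function, so nearly every ray contributes useful signal and far fewer rays are needed for a clean result.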

The more realistic look comes in part because the surfaces self-reflect more accurately with the new shader.

“The amount of light given out by a light source is the amount that gets reflected back, other than the amount of light the surface absorbs,” Snow explains. “That’s the real light physics. Getting creatures like our terminators to self-reflect is vital. We’ve been cheating self-reflections for as long as we’ve been doing this stuff, and frankly, I think that’s probably what leads to the ‘Oh, it looks CG’ comments. Being direct about the way a surface bounces light itself makes such a difference in the realism. I can’t live without it anymore.”

The shader has another advantage, as well: It gives the lighters more consistent results. “Before, even though specular and reflection are the same, we computed them differently,” says Philippe Rebours, digital production supervisor at ILM. The separate computations for reflections from the surface and from the environment often produced inconsistent looks through a sequence as, for example, lighters moved spotlights.

“Now, specular and reflection are the same, and diffuse and indirect are the same, and the creature reacts the same way as it did on the turntable,” Rebours says. “So that’s a win.” That meant the crew could move the creatures in this film from high-contrast, bleached-out desert environments into dark interiors more often and with less hassle.

“Usually, in the past, even on Iron Man, we had to rejiggle materials to make that possible,” Snow says.

Interestingly, though, the realism was too much for a sequence of shots that take place in a dark factory lit only by sparks and explosions. “The terminator looked too busy,” Conran says. “We had to dial it back.” To do so, the lighting TDs used broader lights and fewer of them.

With all the new shaders came a different way of working. Although the shaders reduced the number of pre-passes necessary to set the lighting, it took longer to compute each rendering pass. “What you want is quick feedback so you can do as many iterations as you want,” Rebours says. To make that possible, Conran wrote a GL preview that they named Layer Cake.

“We could position a light, see how it moves with the creature, and even render the entire shot frame by frame to see how it looks in motion,” Rebours says. The GL render didn’t reproduce the final RenderMan result exactly, but it was close enough for lighting design.

The tight schedule for this project made the Layer Cake tools especially important—the crew had from October to May to create the effects.

“In the life of a sequence, you first create creatures, viewpaint them, texture them, put in the animation, and then go into lighting,” Rebours says. “You discover the character of different creatures in animation. And, it’s the same on the TD side. You discover how the creature takes light and learn how to beautify the creature. Usually, you do that on three or five shots in production, but our schedule was so tight we did lighting on creatures in layout. We did this discovery process very quickly.”

Art-Directing Fluid Sims

In addition to new lighting technology and techniques, the crew also used the studio’s Zeno tools and Maya to create rigid-body and soft-body simulations, explosions, debris, dust, and smoke, and developed new techniques for working with fluid simulations.

ILM matched the Stan Winston mechanical puppet of the T-800, which starred in some close-up shots, and turned its CG fighting machine loose. Stan Winston and his studio created the first terminator for the 1984 film.

“We have five fluid-simulation shots,” Conran says. “Our fluid artist, Lee Uren, was looking at all the knowledge from previous shows. Then, halfway through production, our R&D department came up with a new smoothed-particle hydrodynamics (SPH) system, which is a particle system where all the particles are connected to one another, and Ian Sachs came up with a great lava-simulation technique.”

With this technique, a CG artist could place particle emitters set to initial temperatures; the emitted particles streamed down a trough. As the particles cooled, they became stickier and more viscous. By judiciously placing the emitters, the artists could cause particles to cool more at the edges than in the center.

“It was a stunning visual effect,” Conran says, “and so easy from an artist’s point of view. We’d get the shape from a fluid simulation, layer this liquid-metal particle sim on top, set the initial temperature, and it would give us rich colors and cooling. We’d get a beautiful simulation with black-body radiation—blacks with oranges and reds running in the middle.”
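The temperature-to-viscosity coupling at the heart of that lava technique can be mocked up simply. The sketch below is a toy stand-in for the behavior described above, with every constant invented for illustration: particles start hot at an emitter, cool toward ambient temperature each step, flow less as their viscosity rises, and pick up a crude black-body tint as they cool.

```python
def cooling_metal_particles(n=5, steps=3, emit_temp=1600.0,
                            ambient=300.0, cooling_rate=0.2):
    """Toy version of a temperature-driven liquid-metal particle sim:
    hot particles flow freely; cooling particles thicken and slow."""
    particles = [{"temp": emit_temp, "x": 0.0} for _ in range(n)]
    for _ in range(steps):
        for p in particles:
            # Newtonian cooling toward the ambient temperature
            p["temp"] += (ambient - p["temp"]) * cooling_rate
            # viscosity grows as the particle cools: cold metal barely flows
            viscosity = 1.0 / max(p["temp"] / emit_temp, 1e-3)
            # hotter (less viscous) particles travel farther down the trough
            p["x"] += 1.0 / viscosity
    return particles

def blackbody_tint(temp, emit_temp=1600.0):
    """Crude black-body color ramp: white-hot fades through orange and
    red toward black, since blue dies first and red lingers longest."""
    t = max(0.0, min(temp / emit_temp, 1.0))
    return (t, t ** 2, t ** 4)
```

Mapping color straight off the temperature channel is what gives the “blacks with oranges and reds running in the middle” look for free: the slow-cooling core stays bright while the edges darken, with no hand-painting required.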

Long Shots

Those shots of molten metal were among the final shots that ILM finished. “We also had some that didn’t look so hard, with planes flying in a canyon sequence,” Conran says. “It looks like a traditional shot, but the whole canyon is digimatte.”

Although the various terminators are the most obvious effects in the film, the 20 minutes of digital-effects footage ILM created also includes long chase sequences for which the artists blended CG terminators, digital vehicles, digital doubles, and digimattes (that is, digital environments) with an astounding mixture of location photography, motion-control, and miniature shots.

In one such blended sequence, for example, the harvester emerges from a gas station explosion, the Moto-Terminators scream out of its legs, and the vicious bikes chase after humans in cars and trucks on the road.

Snow details the visual effects work as the sequence plays on screen in an ILM viewing room: “The practical guys built a driving screen, a scaffolding thing with pots of gasoline that created a wall of fire.”

Snow continues: “We shot it with multiple cameras and cloned it to get an even bigger wall of fire. Then, as the CG harvester emerges, we added CG fire elements to help with the transition as it smashes through the gas station canopy, which we fully replaced with CG. You can see the flame breaking around him. The Moto-Terminators that come out are fully CG, but we inserted some over stunt riders that we had to paint out, so we had to reconstruct the road. The tow truck bouncing around is real, but we replaced the ball hanging down. The cars are real, and we shot a car being launched, but it went straight up in the air. So, we replaced it with a CG car that hits a truck and rolls over it. As the camera whips around, the CG bike throws itself on its side to dodge the rolling car, gets back up, and speeds off.”

The giant, headless terminator shooting from behind the building is a CG harvester. It houses the Moto-Terminators, which race out from inside its legs to round up humans.

For another sequence, ILM built a digital bridge and an 800-foot canyon beneath to blend and extend shots filmed in three locations—a bridge over the Rio Grande Gorge near Taos, New Mexico, a partial bridge set in Albuquerque, New Mexico, surrounded by red dirt, and a flat road—and then flew Hunter-Killers down the digital gorge and added practical explosions, actors shot on wires, and the harvester.

Still another sequence begins in Albuquerque on location, moves into the backlot for shots of a helicopter against bluescreen, and then back outside. ILM added miniature satellite dishes from Kerner Optical to the real backgrounds, and miniature backgrounds to the bluescreen footage. The artists cut from a real helicopter to a CG helicopter, and from Christian Bale to a stunt actor and back to Christian Bale, and they added digital elements, including a nuclear explosion, to the background to create the aftermath of a war.

One sequence takes place in the wreckage of San Francisco where, after the judgment-day war, Skynet built the manufacturing plants and refineries it needed for its processing center and bulldozed through the rest.

“The conceit is that there is a lot of computing power in San Francisco,” Snow says. “We even made jokes about putting it here at ILM because we have so much compute power. But, we ended up moving it downtown to give John Connor a longer journey.” The shot starts with Christian Bale riding a proxy Moto-Terminator filmed from a helicopter. ILM removed that bike, put him onto the beefed-up CG Moto-Terminator, blended to a digital double, then pulled back to reveal a digimatte painting of Skynet.

“These sorts of shots have precedents in things like the big, long shots you’ve seen in Children of Men and some of Brian De Palma’s work,” Snow says. “But, those are always intensely choreographed with a week of rehearsal and then shot all in one. We’re trying to give that impression, but with many disparate elements. It’s complicated to do. We’re using every one of the tools we like to group together and call D-Cinema—layout tools, rotoscoping tools, and the ability to reproject plates onto one another.” And to create the look, the artists relied on the energy-conserving shaders and HDRI.

High Contrast

“The new shaders were important to be able to match the real puppets on set,” Snow says, “but also to cope with the harsh look of the film. McG and [director of photography] Shane Hurlbut wanted this extreme look where the black levels go to hell and the highlights go beyond the dynamic range that we could cope with until a few years ago. So combining our energy-conserving shaders with HDRIs protected us in the harsh desert environment, where the sun is many stops higher than the shadow side.”

On set, the ILM crew captured HDR images using five exposures three stops apart every time the lighting in the environment changed, as well as stills for photomodeling the environments and shots of the chrome and gray spheres. “I like to have the chrome and gray spheres because they pin down the lighting when you shoot the plate,” Snow says. “And because we scan them, they give me something to color-match the HDRIs to. Also, it’s our only way to capture really dynamic lighting environments like the explosions and sparks in the factory scenes. We’re looking into ways to capture dynamic lighting as HDRI data, but we didn’t have anything ready for this film.”
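Merging bracketed exposures like these into one radiance value can be sketched generically. The snippet below is a standard weighted-average merge assuming a linear camera response and invented pixel values, not ILM’s pipeline: each exposure estimates scene radiance as pixel value divided by exposure time, and a hat-shaped weight discounts values near black or near clipping, so each part of the scene is reconstructed from the exposures that captured it best.

```python
def merge_hdr(exposures):
    """Merge bracketed exposures of one scene point into a single
    radiance value. `exposures` is a list of (pixel_value_0_to_1,
    exposure_time) pairs, such as five shots three stops apart."""
    def weight(v):
        # hat function: peaks at mid-gray, zero at black and at clipping
        return 1.0 - abs(2.0 * v - 1.0)

    num = den = 0.0
    for value, t in exposures:
        w = weight(value)
        num += w * (value / t)  # each shot's radiance estimate
        den += w
    return num / den if den > 0.0 else 0.0
```

With a linear response, a scene point of radiance 0.1 recorded at exposure times 0.5 and 4.0 yields pixel values 0.05 and 0.4; both estimate 0.1, and the merge returns exactly that, while a fully clipped or fully black reading contributes nothing.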

To be certain the final shots would hold up after DI, Snow had ILM create two versions of each and sent both to McG: the neutral version the crew worked with in production, and a DI proxy version that matched the super-high-contrast, desaturated look that McG and Hurlbut wanted.

“We knew going in it would be harsh, so we used the more energy-conserving shaders and higher dynamic range images in our look development,” Snow says. “And in compositing, we created the two looks. The neutral grade gave them maximum flexibility, and the DI proxy showed that our black levels were robust enough to hold up. So, the shots dropped right in, and we’ve had no kickbacks from DI. It’s the first time we haven’t had to go back and redo some shots.”

Each year, as tools become more sophisticated and compute power increases, CG artists have the opportunity to experiment with the art of visual effects in new ways. The look of this film and the kind of visual effects it incorporates provide a dramatic example of just how far the industry has come.

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at
BarbaraRR@comcast.net.