Animation Evolution

If the young reporter Tintin, star of the comic-book series by the Belgian artist Hergé and, most recently, an animated feature film, were to write about the making of that film for his newspaper, Le Petit Vingtième, he’d surely headline it: “First animated feature directed by Steven Spielberg! First animated feature produced by Peter Jackson!” And then we’d see the headline especially interesting to those in computer graphics: “First animated feature created at Weta Digital!”

Or is it? It wouldn’t be much of a stretch to call large portions of Avatar—the most successful film of all time, also largely created at Weta Digital—an animated feature. After all, in much of that film, the Na’vi are animated characters in a virtual environment. And, as they did for Avatar, Weta Digital animators performed Tintin’s characters using data captured from actors wearing head rigs as part of a facial-capture system developed at the studio. Award-winning directors famous for live-action, action-adventure movies directed the actors for both films on a performance-capture stage set up by Giant Studios and “filmed” them with a virtual camera while watching a real-time, on-set composite.

“[Tintin] was really an evolution of what we’ve done for visual effects,” says Joe Letteri, senior visual effects supervisor at Weta Digital, who received Oscars for the work on Avatar, King Kong, and the two Lord of the Rings films he supervised. And therein lies one of those clues that Tintin and his dog Snowy so famously uncover: a clue to the reason critics are praising it as the most successful performance-capture film to date. Letteri brushes off the distinction.

“We rolled straight into what we had done for Avatar,” Letteri says. “We developed a new subsurface technique for the skin to have it look a little better, we developed some new facial software to add a layer of muscle simulation beyond what we could track and solve from the facial capture, and we developed a new hair system that we also used on Planet of the Apes. But, from a performance-capture point of view, we are still recording an actor’s performance. It was no different from mapping data to the Na’vi or an ape. We were making comic-book-inspired characters, not ones that looked like humans, but there’s always a level of animation and interpretation. We had big sequences in King Kong that were entirely computer-generated, most of the scenes in Avatar were entirely in a CG virtual world, and Tintin is in a virtual world all the way. For us, there’s no difference.”

Tintin was a success two months before it opened in the US. The film’s approval rating on Rotten Tomatoes hovered around 86 percent as it topped the international box office during the first two weeks following its release in Europe, and by the end of the third week, Tintin had captured $159.1 million at the box office, even though it had yet to open in the US or many other regions.

Presented by Paramount Pictures and Columbia Pictures, the film is a rollicking action-adventure that sends Tintin and his dog Snowy dashing through Europe and Africa, on ships, trains, and planes, and even into the past, and a comparison to Spielberg’s Indiana Jones films is apt. It stars Jamie Bell as Tintin; Andy Serkis as the whiskey-soaked Captain Haddock; Daniel Craig as Ivan Ivanovitch Sakharine, a pirate and a descendant of Red Rackham (whom he also plays); Toby Jones as the pickpocket Silk; Simon Pegg and Nick Frost as the bumbling detectives Thompson and Thomson; and Snowy, a little white terrier who is Tintin’s constant companion. All the characters are CG; Snowy is the only star performed entirely with keyframe animation.

And yet, everything about Tintin, except for the fact that it is an animated film, has a live-action sensibility. The characters have a cartoon patina, and their performances are a bit broader than a human’s, but the artists started with real performances and then referenced reality to add skin, clothes, and hair. For environments, the crew didn’t have live-action plates, so they referenced the comic books for design and the real world for textures and dynamics. The film may trace its origin to comic books from the early 1940s, but this is not your father’s animated film. The attention to detail is amazing.

“We ran this show exactly like every other show,” says Simon Clutterbuck, digital creature supervisor, “as if we were doing 100 shots in a visual effects movie. The focus on every texture, every motion, every simulation was intense. We never said, ‘Oh, that’s done,’ and locked an asset. We looked at everything every day. We had a process where things ran in parallel; we even built while lighting. If something in a shot needed to change, we changed it. All the way through production, shots constantly evolved and got better and better.”

Animators at Weta Digital started with performance-capture data for all the characters except Snowy, the little white terrier. Steven Spielberg directed actors who performed the characters on a motion-capture stage using a system at Giant Studios similar to the one James Cameron had used for Avatar.

A Model Production

A team of modelers that ranged between 40 and 60 people built the 4000 digital assets needed for the film, creating face shapes and deformations for the animators and adding fur and hair to the characters. Modelers at Weta Digital work within an Autodesk Maya pipeline. Many modelers also sculpt using Autodesk’s Mudbox, originally developed at the studio, and a few add Pixologic’s ZBrush to the mix.

Modelers moved back and forth between hard-surface models and characters, although one team specialized in fur and hair, and another in creating face shapes and deformations. Also, the modelers gave the characters especially detailed hands. “We had amazing reference—an MRI and a life cast of a guy’s hands that we used to build new, high-fidelity hand models,” Clutterbuck says.

The main character, Tintin, had the most difficult face to model. “He’s a balloon with two dark eyes and an oval mouth,” says Wayne Stables, visual effects supervisor. “That worked for Hergé. But we had to develop a three-dimensional character.”

The artists started with the 2D character, picking frames that, when combined like a flip book, created a three-dimensional look. Next, they translated that look to a rigged CG model, asked an actor to mimic Tintin’s expressions from the comic books, and applied those expressions to the 3D model. “Then, we began exploring changes,” Stables says. “We changed the model’s nose and gave Tintin cheekbones and a jaw. By the time we had a Tintin we liked, we had tried 1600 variations.”

In addition to the main characters, the modelers built hundreds of crowd characters. “We created new characters all the way to the end,” says Marco Revelant, models supervisor. “I remember adding a female character in the last month. We generate models from the same elements, even using the same topology for the main characters and the generic characters. The distinction between a main character and a crowd character is in the complexity of the facial system, not in the model itself. For the generic characters, we have an automatic way to generate a basic facial system.”

To rig the bodies, character technical directors worked with a generic model, which they call “genman,” a fully simulated muscle model. Two creature TDs rigged all the characters, one working on Snowy, the other on the human characters.

“We hadn’t done a dog, so that was a full-time job,” Clutterbuck says. “But we had done a lot of development on bipeds for Avatar and had a good genman model. We used it on Apes, but we took it to an extreme for Tintin and built everyone from the same guy, all procedurally. We started with a surface model and used a process we call ‘warping’ to fit the whole rig from a base model to the new model, and it’s good to go. If we weren’t happy with something, we’d fix it on the template and push it out to all the characters.”
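
The warping Clutterbuck describes suggests a simple picture: once a new character shares the template’s topology, the per-vertex deformation from base model to new model can carry the rig along with it. Below is a minimal sketch of that idea, assuming same-topology meshes; the function and its inverse-distance weighting are illustrative guesses, not Weta’s actual tools.

```python
import numpy as np

def warp_rig(base_verts, new_verts, joint_positions, k=8):
    """Fit rig joints from a template mesh onto a new mesh of the same
    topology. Each joint moves by a distance-weighted average of the
    offsets of its k nearest template vertices. (Hypothetical sketch of
    the 'warping' idea, not the studio's implementation.)"""
    offsets = new_verts - base_verts              # per-vertex deformation
    warped = []
    for j in joint_positions:
        d = np.linalg.norm(base_verts - j, axis=1)
        nearest = np.argsort(d)[:k]               # closest template verts
        w = 1.0 / (d[nearest] + 1e-8)             # inverse-distance weights
        w /= w.sum()
        warped.append(j + (w[:, None] * offsets[nearest]).sum(axis=0))
    return np.array(warped)
```

Fixing a problem “on the template” then amounts to editing the base rig once and rerunning the warp for every character.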

For Avatar’s nearly naked Na’vi, the crew had developed Tissue, a simulation system, to build muscles, skin, and fat. “It’s a linear-elastic finite-element system,” Clutterbuck says, “a stand-alone thing with a front-end bolted onto Maya so artists can interact with it. We plug animation into the system and it adds the simulation on top; it’s our tool set for deformation work.”

For Tintin, though, the crew pushed the system further to add dynamics to facial deformations driven by the captured data and keyframed animation. The developers plan to submit a technical paper to SIGGRAPH 2012 on the technique.

“We wanted wobbly cheeks, chin folds, skin colliding with itself around the facial area,” Clutterbuck says. “To get that, we needed both dynamics and facial deformations. So we took what’s effectively a series of blendshapes rigged in the facial puppet and mapped them into the simulation system to add simulated elements to the face.”

To control the constraint-based simulation, artists painted attribute maps on the facial puppet. Clutterbuck gives an example: “We have a [jowly] character named Barnaby, and we have the performance for his chin and lips, but we wanted those areas to interact with his wobbly chin. So, instead of trying to do two separate solutions and blend them, this system unified everything. We painted little patches around his lips, and the attribute map set up everything once. After that, the simulation was procedural. The solver can also wobble, wrinkle, and buckle all at the same time. The animators didn’t see any of this; they concentrated on the performance.”
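
At its simplest, the painted attribute map can be pictured as a per-vertex constraint weight inside one unified simulation: where the map is high, a vertex tracks the captured performance tightly; where it is low, it is free to wobble. The toy timestep below illustrates only that idea; the names and the spring-style integration are assumptions, and the real system is a finite-element solver, not this sketch.

```python
import numpy as np

def constrained_step(x, v, target, attr_map, dt=1.0 / 24,
                     k_follow=50.0, damping=0.9):
    """One toy timestep of a simulation pulled toward the animated facial
    performance by painted per-vertex weights (illustrative only).

    x, v, target: (N, 3) positions, velocities, animated target positions
    attr_map: (N,) painted weights in [0, 1]; 1 = follow performance
    """
    w = np.clip(attr_map, 0.0, 1.0)[:, None]
    accel = k_follow * w * (target - x)   # constraint pull toward performance
    v = damping * (v + dt * accel)        # damped velocity update
    return x + dt * v, v
```

With this setup the animators never touch the simulation: the performance drives `target`, and the wobble falls out of the solve.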

Perfecting Performances

The animators received data for the characters’ faces, bodies, eyes, thumbs, and index and pinky fingers, captured performances that provided what animation supervisor Jamie Beard calls a “starting block.” Beard worked on Tintin for five years, supervising the previsualization and then leading the team of between 50 and 60 animators.

“We offered the director an animated and a live-action world,” Beard says. “On set he could be a live-action filmmaker, blocking out the actors and directing them. Once captured, if the scene was perfect, we’d work on the performances only a limited amount to change them slightly if Steven [Spielberg] wanted to tweak them; directors, being who they are, always have more ideas. But, we’d always go back to make sure we hadn’t detoured too far from the original essence. If we had given Tintin a bigger smile, we made sure he still had Jamie Bell’s performance.”

Finding the balance between Tintin’s stylistic photorealism and reality was the challenge. Unlike for Avatar, in which the animators wanted the audience to see Sigourney Weaver in her avatar’s face, Tintin’s animators needed to apply the facial system to cartoony characters. “We had to cross a threshold,” Beard says. “We have the actors’ performances, but the look comes from Hergé. We wanted those performances, but we had to fit those performances on characters that didn’t look like the actors. That’s when the artistry of the animators came in. We used the same fidelity of data captured from the small cameras that we used on Avatar, only in a completely different way, taking Steven’s direction to fit the expressions and make an animated film. But, you can still see the performances they captured on the characters. We spent a lot of time learning how to move the muscle system for our cartoony humans.”

In addition to the main characters, the animators also manipulated data captured for crowds. “Once they finished principal photography [performance capture] for the main actors and the shot was cut together, they would capture actors for the crowds,” Beard explains. “For the pirate battle, which needed 120 people, I had six actors. We’d do multiple passes with those six people to fill up the scene.” Similarly, to fill marketplaces in England and Morocco with crowds, the crew captured six people at a time.

Because the entire world is digital, the animators also worked on other elements—cans of paint rolling on the floor, coins, ships in the ocean, and so forth, animating by hand all the props and vehicles that couldn’t be animated procedurally. “Procedural animation doesn’t lend itself well to comedy,” Beard says, providing an example. “We had a scene with sleeping sailors on bunks, and they all had to be flopping in their bunks, snoring. The bunks drifted around, and the chains moved independently. The animators who were assigned to that scene had their eyes roll back in their heads. It all had to look natural and slightly comedic. It was a real task.”

Beard divided the team by shots, choosing those that reflected particular animators’ skill sets. “Some people were skilled at animating big, heavy scenes, so they would do everything in those shots,” Beard says. “And, I had some fantastic animators who had a really good handle on Snowy. One strong animator, Aaron Gilman, knew Snowy very well. Aaron has lots of energy, and he’s inquisitive, and the more I talk about him, the more I realize that he is Snowy. He fit the role perfectly.”

Back Story

Steven Spielberg discovered Hergé’s comic books and became a fan after a reviewer in France compared the first Indiana Jones to “Tintin.” In fact, when Spielberg and executive producer Kathleen Kennedy first approached Weta Digital about making Tintin, they planned to make a live-action film.

“The idea was to have us create Snowy,” says Joe Letteri, senior visual effects supervisor at Weta Digital. “So we shot a test with someone from Weta Workshop dressed in a Tintin costume and started on a realistic digital version of Snowy. But, in the meantime, I talked to Peter Jackson and came up with the idea of having him on camera auditioning for Captain Haddock, with Snowy stealing the scene from Peter.” The scene was a tip of the hat to Hergé, who often had Snowy steal scenes from Tintin in his comics.

Letteri first showed Spielberg the test that the director thought they were working on, and then the test with Jackson. “Steven said to Peter, ‘OK, we’re working together,’” Letteri says. Thus, the two directors/producers began exploring ways to make Tintin’s world together, and as they talked, Jackson began suggesting they make it digital. Spielberg was cautious. So, Letteri and Jackson arranged a test.

“By then, we had finished King Kong, and we were getting ready for Avatar, so we asked Jim [Cameron] if we could bring Steven [Spielberg] over to have a look,” Letteri says. “Jim gave Steven and Peter the stage for two days during Thanksgiving break, and that got the ball rolling.” And it rolled all the way into an animated feature created with computer graphics, a film, given the antics of Snowy and the wild action scenes, that could never have been made with live-action photography.

Spielberg and Jackson shot the film on a performance-capture stage at Giant Studios in Los Angeles using Giant’s motion-capture technology and the head-rig hardware and software that Weta Digital had developed to capture facial performances for Avatar.

“Steven was on stage directing, and Peter checked in remotely because he was still working on Lovely Bones,” Letteri says. “They would confer and work out from day to day what to do next. Peter stayed involved for as long as he could be, but he had to go off and prep Hobbit, and he was involved with that as we finished up. He’d still review things and give notes, but our daily calls were with Steven.”

Letteri continues: “I think Steven enjoyed the process. It was freeing to go in and work like he was used to working with the actors and camera, to explore scenes quickly, and then he kicked back to us the things that take a long time. He didn’t have to travel or wait for sets to be built.”
–Barbara Robertson

Scene Stealer

Snowy, the only hand-animated character in the film, appears in most of the scenes with Tintin, sometimes even driving the story. Hergé based the dog on a wire fox terrier, and like that breed, Snowy is intelligent, active, and mischievous. As in the comic books, he’s a scene stealer.

On the performance-capture stage, a puppeteer moved a toy version of Snowy for blocking and giving the actors proper eye lines. In addition, Beard put cutouts of printed images of Snowy on cardboard stands near Spielberg’s monitor to remind him that Snowy would play a big role.

As in the film, at Weta Digital, Snowy often drove the story. “There’s a fine line between a photoreal dog and the caricatured animal in the comic books,” says Clutterbuck. “Finding that balance took a reasonable amount of time. We’d build him, animate, and render him, show him to Peter and Steven, and then fine-tune his proportions until we had a real animal that was also Hergé’s Snowy. It was a full-time job.”

Inside, Snowy has cutting-edge technology. His canine anatomy required a new simulation model because, unlike humans and apes, dogs don’t have collarbones; the shoulder blade—that is, the scapula—is disconnected. A fascia, which is a connective tissue, surrounds groups of muscles, blood vessels, and nerves, and holds them in place.

“We had to build a fascia system that was like a tissue layer that enveloped the muscles,” Clutterbuck says. “Now you can see the form of Snowy’s shoulder down to the elbow changing under the surface of the skin. Richard Dorling [lead software engineer for creatures] developed key muscle models that he attached to the skin to get the surface doing the right thing.”

Before giving Snowy’s performance to the animators, the crew tried motion-capturing a dog. “We did only one motion-capture session and then realized it had to be animation,” Beard says. “You’d think motion capture would free you up, but the dog on the live-action set would be led by a trainer and would look up at the trainer. To get the real terrier attitude of Snowy, we had to animate him all the way. He became one of those characters the animators could really put themselves into. We kept thinking of things Snowy could do to keep people entertained.” In fact, Snowy’s antics are one justification for making an animated feature rather than a live-action film.

For reference, the animators visited local dog clubs, brought dogs into the studios, watched videos, and, of course, read Hergé’s comic books because although Hergé based Snowy on a real dog, he was a comic-book character.

“Snowy has human characteristics in the comics, particularly in his eyes and brows,” Beard points out. “This isn’t a world with a one-to-one relationship with reality. We would start animating with him and find his nose had to be smaller or bigger, and then we would go back and animate him again. And it was hard to light a character with white fur. His eyes would become two black dots, and we couldn’t see what he was thinking. So, we had to keep going back into it and reworking until we could read his expressions, making sure his fur wasn’t changing his performance.”

Modelers referenced Hergé’s reference materials, found photographs of the objects he referenced, and then created period-appropriate CG vehicles in the same style as those in Hergé’s comic books.

Hair Today

Revelant, who was in charge of the hair and fur team from the modeling side, has been working with fur at Weta Digital since King Kong. After Avatar, he and code department supervisor Alasdair Coull worked on a prototype system called Barbershop that Coull then took to completion. “The system we had was a problem because it had a long learning curve, and only a few people could use it properly,” Revelant says. “Barbershop really helped with Snowy.”

Hergé’s Snowy has a simple design; he’s white, with no shading. “He’s like a cloud that a kid would draw,” Revelant says. “He’s defined by his outline. We found reference of the dog Hergé used as reference, but the problem was that Snowy doesn’t look like that dog. So, we had to figure out two things: What was under the fur, and how was the fur going to work. We’d take the model, apply the fur, look at it, change the model, and transfer the fur to it, back and forth.”

With the previous system, the artists would have had to place guide hairs that multiplied into thousands at render time, and after rendering, the artists could not move any one of the resulting strands of hair. With Barbershop, each of Snowy’s million strands of hair could be a curve with which the artists could groom the terrier’s rough coat. Similarly, digital barbers used the system to perfect Tintin’s iconic coif.

“The concept is that what you see in Maya is what gets rendered,” Revelant says. “You can see the full density in Maya, although artists can reduce the level of density as they refine the look. And, we use an OpenGL shading scheme that gave us a good representation of the lighting while we groomed; it uses the same algorithm we use on our [Pixar RenderMan] side. We don’t interpolate hair; there is no creation of hair after we finish grooming.”

When hair and fur groomers “brush” the hair, they move control points but at any time can convert the hair to curves and manipulate the curves, as with any Maya primitive. “You can basically use the brush to give parameters to the hair,” Revelant says. “You can comb it the length you want, straighten the curve, or curl it. You’re not painting on a map; all the information stays in the hair.”

One advantage of the system is that it is independent from the underlying mesh, which means that changes to the UVs in the topology do not necessarily affect the fur. It also means that the artists could transfer the hair groom for one character to another and generate variations without much hassle. “We can even merge one groom with another and create a third one,” Revelant says. “We used that a lot for the crowd characters.”
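
One way to see why a groom can survive UV changes and hop between characters is to anchor each hair to topology rather than to a texture lookup: a root stored as a face index plus barycentric weights evaluates on any mesh that shares the groom’s topology. The sketch below is a hypothetical data model, not Barbershop’s actual internals.

```python
import numpy as np

def attach_root(face, bary):
    """Store a hair root as a topology-relative anchor: a face index and
    barycentric weights within that face. (Illustrative sketch only.)"""
    return {"face": face, "bary": np.asarray(bary, float)}

def root_position(mesh_verts, faces, anchor):
    """Evaluate a hair root on any mesh sharing the groom's topology.
    Because the anchor never touches UVs, re-UVing the mesh leaves the
    groom intact, and the same groom can be evaluated on another
    character's mesh to transfer or vary it."""
    tri = mesh_verts[faces[anchor["face"]]]   # (3, 3) triangle corners
    return anchor["bary"] @ tri               # barycentric combination
```

Merging two grooms for a crowd variant then reduces to blending the curves of two grooms that share the same anchors.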

Hair Lights

In addition to having tricky grooms, Tintin and Snowy also had the most difficult hair to render. Tintin’s light-red hair could easily look too blond or too dark. And Snowy’s hair is white. They were the first two light-haired main characters Weta Digital had encountered, and their hair demanded new shading models.

Jedrzej Wojtowicz supervised a team of 16 people in the shading department who, with the help of R&D, dealt with the issue. “The problem was the scattering of light,” Wojtowicz says. “Previously, most of the hair we created was dark, so we could have simpler models than we needed for Tintin. Imagine a hair fiber as a metal tube. If I shine a light on it, it reflects that light; the light bounces back in a straightforward fashion. That’s analogous to black hair. Light-colored hair is closer to a candle, a cylinder that’s partially reflective but allows light to travel through it. As some of the light travels through, it picks up some of the coloration and bounces out with a different color. The rest of the light travels through in a straight line and absorbs some color. So the problem was how to model the interaction between hundreds of these highly light-scattering hairs. What does the light that picked up color from the first hair do when it bounces into another hair?”

And that’s only part of the problem. As the light propagates through a volume of hair, the color it absorbs varies depending on how rough or shiny the hair is. Rougher hair scatters light in more directions than smooth hair; the energy spreads and imparts a different quality and amount of light to neighboring hairs.
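
Dual-scattering models make both effects tractable with two simple observations: light reaching a fiber deep in the volume is roughly the product of the per-fiber transmittances in front of it (so each crossing tints blond hair a little warmer), and the angular spread grows with the number of forward scatters. The pair of toy functions below captures only those two ideas; they are deliberately simplified stand-ins, not the production shader.

```python
import numpy as np

def front_transmittance(albedo, n_fibers):
    """Toy forward-scattering term: RGB fraction of light surviving a pass
    through n_fibers hairs, each transmitting `albedo` per channel.
    (Simplified sketch of the dual-scattering intuition.)"""
    return np.asarray(albedo, float) ** n_fibers

def scatter_spread(beta, n_fibers):
    """Angular spread after n forward scatters: per-fiber deviations add
    in quadrature, so rougher fibers (larger beta) diffuse the light
    faster and light their neighbors differently than smooth hair."""
    return beta * np.sqrt(n_fibers)
```

The remaining production question the quote raises, how these terms interact with shadowing, is exactly the part the toy version leaves out.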

“This happens in real life,” Wojtowicz says. “Our goal was to imitate it as best we could using materials we can generate by studying photography and by doing spectral measurements. Nature sits as a precedent; that’s why we attack things in a physically based way. If we have to make assumptions after the fact, we will.”

To solve the problem, the shader developers moved from a model based on light interacting with a single hair fiber to a dual-scattering model. And then, they found ways to create shadows within the volume. “We had worked on scattering the light between the hairs, but what if the character’s hand blocked half the hair?” Wojtowicz questions. “How do the scattering and absorptive techniques work with our shadowing techniques? Each hair had to ask, ‘How exposed to light am I? How deep in the volume?’”

To move the hair based on the characters’ actions or on elements such as wind in the environment, the character team used Maya nCloth for dynamic simulations, along with various other methods. “We had different models for different things,” Clutterbuck says. “Hair in the wind took one simulation approach. Snowy took another. And, Barbershop has a deformation interface built into the grooming tool, so we can deform the hair any way we want. For a shot when Tintin walks past a mirror and combs his hair with his hand, we built an animation puppet that we plugged into the animation system to deform the hair. We used a bit of everything.”

Weta Digital’s hair groomers controlled coifs, coats, and beards with a new “what you see is what you get” system called Barbershop. Tintin and Snowy’s light hair caused researchers and character effects TDs to devise new shading models to more accurately scatter light through the volumes.

Skin Tight

Weta Digital artists applied the same degree of attention to detail to create the characters’ digital skin and other textures in the Tintin environment; however, this process derived from the physical world, not the digital. Gino Acevedo, creative art director and textures supervisor, devised the technique for Avatar and enhanced it for Tintin: He makes life casts to capture fine details, and then scans the results into Adobe’s Photoshop to make displacement maps.

For Avatar, Acevedo used a material made from seaweed. For Tintin, he switched to a silicone-based material that he says captures 30 percent more detail than the material he had used before. “I made a huge library of skin patterns—faces, elbows, knees, backs, fronts, butts, feet,” he says. “And the great thing about the process is that it works for rocks and trees. We used it a lot for the tree bark. I’d take my little bucket of silicone and slather it on the sides of trees, then peel it off. It works incredibly well—so much better than scanning.”

To capture textures for Tintin’s face, Acevedo started by painting a thin layer of the silicone material on someone with what he calls “interesting skin,” leaving the volunteer’s nose open. The material sets quickly, and once set, he applied plaster bandages over it to create a model of the face. Then he removed the plaster cast, which doesn’t stick to the silicone, carefully peeled off the silicone, and placed the thin layer of silicone in the plaster cast, which acted as a cradle.

Next, Acevedo brushed a two-part mixture of urethane into the negative face cast and sloshed it around until it set. “I usually do a couple of layers to build up the thickness and create a shell,” he says. “Then I reinforce it even more with a rigid polyurethane foam that I pour into the back. It takes up the space and sets up in a few minutes.”

When Acevedo removed the plaster bandages and peeled the silicone skin from the urethane, he had a perfect cast of the person’s face, “every nook and cranny,” he says. But he wasn’t done yet. Next, Acevedo mixed a transparent silicone material, the same type used for animatronic puppets, until it was as thick as honey, and poured it over the face cast.

“I prop it up and use an air hose to blow [the silicone] around to get an even consistency,” says Acevedo. “When I come back in the morning, it’s cured. When I pull it off, from the top of the forehead down, I get a skin the thickness of a latex glove. It’s a copy of the face. If you hold it up to the light, you can see all the skin detail.”

The next task was to digitize the silicone skin. “We cut darts into it to lay it on a flatbed scanner,” Acevedo says. “It looked like a texture map.” Even so, it wasn’t completely flat, so they modified the scanner.

“We cut pieces of Plexiglas to build a wall around the top of the glass and filled the void with baby oil,” Acevedo explains. “We put another piece of glass on top and got perfect scans: 8k-resolution maps with incredible detail.”

Then, in Photoshop, artists removed any dust, scratches, and air bubbles, and amped up the contrast to create the displacement maps. “For the most part, though, the scans were 85 percent ready to go,” Acevedo says. “We saved them online in a library for the artists. When we started a character, say Captain Haddock, we would look at all the scans of people with crow’s-feet and pick one. Then, in Mari, our 3D paint program, we would move the texture around and paint the displacement onto the model.”
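
The Photoshop cleanup step amounts to clipping outliers such as dust and bubbles and stretching contrast so mid-gray reads as neutral displacement. A minimal sketch of that conversion, assuming a grayscale scan as a float array; the percentile thresholds are illustrative, not the studio’s recipe.

```python
import numpy as np

def scan_to_displacement(scan, low_pct=2, high_pct=98):
    """Turn a grayscale skin scan into a signed displacement map:
    clip percentile outliers (dust, scratches, air bubbles), stretch
    the contrast to [0, 1], and recenter so 0 means no displacement.
    (Illustrative sketch of the described cleanup, not an exact recipe.)"""
    scan = np.asarray(scan, float)
    lo, hi = np.percentile(scan, [low_pct, high_pct])
    disp = np.clip((scan - lo) / (hi - lo + 1e-8), 0.0, 1.0)
    return disp - 0.5   # signed displacement around the surface
```

In production the result would be painted onto the model in a 3D paint package such as Mari, as Acevedo describes.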

Tintin, who has younger, smoother skin, created special problems. “Tintin aged all of us, but I think what we ended up with looks good,” Acevedo says. He explains: “People with perfect skin are very difficult. He’s a redheaded kid, so we thought maybe he should have freckles, but he looked too much like Howdy Doody. So, we started studying young people’s skin to find some details we could use. Tintin now has little scars, like maybe he had a little chicken pox, and very subtle freckles you don’t notice when you first see him, but if they weren’t there, you’d know.”

They also experimented with his skin color. “We had different masks for his cheek area to give him a rosy blush from time to time,” Acevedo says.

To develop shaders, the team started with those used on Avatar. “Even though Jake was blue and Tintin close to pink, we knew the specular qualities of the skin, the technical setup and structure, and how to exploit RenderMan in the best way,” Wojtowicz says. “We could transfer all that. All the characters then veered from that, but at their core, we started from a unified base in terms of the technical structure.”

A new subsurface scattering model helped give the fleshy characters in Tintin a more realistic look, and even helped Snowy. “We had used a dipole model through Avatar,” Wojtowicz says. “That gave us shallow scattering. The new model allowed us to scatter light at a deeper level for different extremes. We could get good-looking candles and have dark-skinned characters, as we do in Africa. It also gave us the ability to give Snowy’s ears that nice pink glow; if we backlit characters, the light would scatter in a more aesthetically pleasing way.” The research into the new subsurface scattering model resulted in a SIGGRAPH 2011 technical paper titled “A Quantized-Diffusion Model for Rendering Translucent Materials” by Eugene d’Eon and Geoffrey Irving.
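
The shallow-versus-deep distinction Wojtowicz draws can be illustrated with the classical diffusion-style radial falloff: light entering the surface at one point re-emerges at distance r with intensity roughly proportional to exp(-sigma_tr * r) / r, so a smaller effective extinction lets light travel deeper and farther, as in a candle or a backlit ear. This is a textbook simplification for intuition only, not the quantized-diffusion model the paper describes.

```python
import numpy as np

def diffusion_profile(r, sigma_tr):
    """Toy radial falloff of diffusely scattered light re-emerging at
    distance r from the entry point, with effective extinction sigma_tr.
    Smaller sigma_tr -> deeper, wider scattering (candle wax, pink ears);
    larger sigma_tr -> shallow scattering. (Intuition sketch only.)"""
    r = np.maximum(np.asarray(r, float), 1e-6)   # avoid the r=0 singularity
    return np.exp(-sigma_tr * r) / r
```

Comparing two extinction values at the same radius shows why one shader base could span candles and dark skin alike.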

Costume Department

All the characters except Snowy, of course, wear period costumes, and 15 people worked on those digital costumes, creating patterns for all the garments, dressing the characters in multiple layers of clothes, and simulating the movement. “When you look at Tintin, you forget that the guy is wearing a three-piece suit,” Clutterbuck says. “It’s just there, and you expect it to do the right thing. But, it represents years of work.”

The studio used nCloth in Maya for the simulation, augmented with proprietary software. “We’ve never done clothing to this scale,” Clutterbuck says. “The Na’vi wore loincloths. We started thinking that if we can’t see a shirt under a jacket, we wouldn’t need to simulate it. But you don’t get the right look. So, all the clothes are real; they all have dynamics. We solved the shirt, under the jumper, under the jacket, altogether.”

For cloth textures, Acevedo scanned materials directly. “We had a wardrobe department that made the costumes and put them on models so the creatures department could take videos of the clothes and see how the different types of material moved. We did scans of those materials and used them for the textures.”

A multi-step method that begins with life casts resulted in libraries of displacement maps that artists could draw from to produce skin textures for characters ranging from craggy Captain Haddock to youthful Tintin. The artists captured tree bark and other textures from the real world, as well.

Hergé’s World

The artists took as much care with the environments as they did with the characters, carefully creating a world that respected the world Hergé had drawn. This was possible in part because, in addition to the comic books, Hergé’s [Georges Prosper Remi’s] estate gave Weta Digital access to the artist’s original references. “Hergé had a realistic style, but quirky,” Letteri says. “The way he worked was similar to the way we work as visual effects artists. He’d gather all this reference and create, say, a tank that would be a mix of a couple of tanks he liked. We saw his old photos, so we would try to find the objects he photographed. We looked for additional photos as well. We’d figure out the way he drew the object, and then fill it out in three dimensions. It was a really good project.”

Tintin’s apartment, for example, which the artists modeled and textured to match artwork from the comic book, has a phone based on the phone Hergé used as a reference. The cars, the street where Tintin lives, and the market are all part of the same European style that Hergé used. “We based everything on reality,” says Stables. “If we don’t have reference for something, it doesn’t exist.” At the VIEW conference in Turin two days before the film opened in Italy, Stables demonstrated the crew’s determination to match Hergé’s world by overlaying a 3D building from the film on a page from the comic. The two matched perfectly.

“The assets in this film represent a huge effort from the research and modeling side,” Revelant says. “We have a way to dress the sets procedurally, but generally we hand modeled everything. We went through all the panel art to find the buildings Hergé drew, and looked for references for buildings with the same style and shape. When you go to that level, procedural is not an option. You want to do it right.”

Stables supervised much of the work in Tintin’s apartment, inside a ship, and an exciting chase sequence through a marketplace, but the film also puts Tintin on pirate ships and in the middle of a pirate battle. Another supervisor, Keith Miller, handled the neighborhood outside Tintin’s apartment, several shots of a seaplane taking off and flying through a storm, and 85 shots in the pirate battle. All told, five visual effects supervisors split the work on the film.

“Water was the most challenging,” Miller says, “particularly for the pirate battle. We tried to keep it as photoreal as possible. The previs stylization was non-physical, so we tried to maintain that character yet preserve the natural aspects of water.” To do that, the team updated its Fast Fourier Transform (FFT) library with new algorithms to simulate the waves and created Smoothed-Particle Hydrodynamics (SPH) simulations for the cresting foam. “We used [Exotic Matter’s] Naiad for hero simulations and interactions when we’re disturbing surfaces with sinking objects,” Miller says, “as well as our own Synapse software.”
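
The FFT approach Miller mentions, popularized by Jerry Tessendorf’s ocean-wave work, builds a sea surface as a sum of sinusoids whose amplitudes come from a statistical wave spectrum and whose speeds follow the deep-water dispersion relation ω = √(gk). The 1D sketch below (hypothetical names, not Weta’s library) evaluates that sum directly at one point; a production system evaluates it for an entire grid at once with an inverse FFT.

```python
import math
import random

def ocean_height(x, t, num_waves=32, length=100.0, wind=8.0, seed=7):
    """1D sketch of spectral ocean synthesis: draw amplitudes from a
    Phillips-like wind spectrum, advance each component with the deep-water
    dispersion relation omega = sqrt(g * k), and sum the waves."""
    g = 9.81
    L = wind * wind / g                 # size of the largest wind-driven waves
    rng = random.Random(seed)           # fixed seed -> repeatable ocean
    h = 0.0
    for n in range(1, num_waves + 1):
        k = 2.0 * math.pi * n / length  # wavenumber of this component
        spectrum = math.exp(-1.0 / (k * L) ** 2) / k ** 4  # Phillips-like falloff
        amp = math.sqrt(spectrum) * rng.gauss(0.0, 1.0) * 0.1
        phase = rng.uniform(0.0, 2.0 * math.pi)
        omega = math.sqrt(g * k)        # deep-water dispersion relation
        h += amp * math.cos(k * x - omega * t + phase)
    return h
```

Spectral waves like these give the broad ocean swell; the cresting, breaking behavior is where particle methods such as SPH, and solvers like Naiad and Synapse, take over.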

Concept art from Michael Pangrazio, an artist whose first matte paintings were for Star Wars: Episode V–The Empire Strikes Back in 1980, and who worked as an art director at Weta Digital on several live-action films starting with
King Kong, helped everyone visualize the world they wanted to create. “When you look at his work, it seems plausible,” Stables says, “like a day I could photograph.” Even concept art needed to look real.

“I felt like I was making a live-action movie,” Stables says, “like I was making an Indiana Jones film, even though we were animating. The way we approached the show—from effects, to simulation, to lighting, to the camera—was to base everything in a plausible, realistic way, with the idea we could take liberties. Steven [Spielberg] is a live-action director. His world has been in live action and film, and live action is a world we understand. The fact that we’re using animated characters and we aren’t filming backgrounds didn’t make any difference. We’re composing and lighting as though we were on a live-action film. The biggest issue for me, though, was the interiors. We had to push our indirect illumination.”

Using RenderMan, the lighters sent rays inside a point cloud, which was a simplified color version of a scene. “Then for final beauty renders, the surface shaders did a lookup into the point cloud to do the indirect illumination,” Stables says. “For shadows, we used our PantaRay to generate big point clouds. When the shader executes the final beauty pass, the specular looks up into the point cloud, as well. It’s not a mirror type of reflection. We weren’t doing caustics; we weren’t bouncing specular around. But we were getting a glossy reflection.”
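
The core of the point-cloud lookup Stables describes is a gather: radiance is baked into a sparse cloud of surface samples, and at shade time the shader blends nearby samples to approximate indirect light. The toy version below (hypothetical names; the RenderMan/PantaRay pipeline is far more involved) shows only that gathering step:

```python
import math

def gather_indirect(shade_point, cloud, max_dist=2.0):
    """Approximate indirect illumination at shade_point by gathering baked
    radiance from nearby points, weighted by a smooth distance falloff.
    cloud is a list of ((x, y, z), radiance) samples."""
    total = 0.0
    weight = 0.0
    for point, radiance in cloud:
        d = math.dist(shade_point, point)
        if d < max_dist:
            w = 1.0 / (1.0 + d * d)  # smooth falloff, finite at d = 0
            total += w * radiance
            weight += w
    return total / weight if weight > 0.0 else 0.0
```

Looking up glossy specular in the same cloud, as Stables notes, trades mirror-accurate reflection for a blurred but plausible one, which is exactly the distinction he draws between glossy reflection and true caustics.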

The test case was a sequence that takes place within a ship’s corridors. “We couldn’t get away with just diffuse light,” Stables says. “We had to account for specular light. We couldn’t do the kind of cheating and magic lights we might have done in CG. We didn’t want to, and also, Steven Spielberg is extremely particular about lighting.”

The indirect specular and indirect diffuse lighting were especially important for lighting the characters. “Because specular is angle-dependent, it’s really the main component that allows you to read the shape of an object,” Wojtowicz says. “So a lot of our look development centered around dialing in the specular qualities to their best, especially with Tintin. In the comics, his face approximates a sphere, and to be faithful to a degree to that, he’s geometrically simple.”

The more haggard characters, like Haddock and Sakharine—older, more mischievous, with interesting geometry in their faces—are easier to light. Tintin’s simple, youthful face gave the lighters nothing to hang shadows on, no angles. “We had to squeeze details from a wide array of techniques, and one of those was having an intricate specular response,” Wojtowicz says. “If we were to put Tintin in his apartment with its walls of brightly colored wallpaper, and put a couple of hot light sources at either end, the entire room would light up and wash him out with all the diffuse light contribution from all the angles in the room. So, if we don’t have a specular reflection, we lose his shape. We even used indirect specular in exterior scenes when we needed to increase the visual complexity of an object moving through the scene, or the camera moving through the scene. We were more selective because it’s a bit more expensive in terms of render time, but we did use it.”
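
Wojtowicz’s point, that flat diffuse fill is the same from every direction while specular changes sharply with the surface normal, can be seen in any standard specular model. The Blinn-Phong term below (a textbook model, not necessarily Weta’s shader) makes the contrast concrete: a normal facing the light’s half vector returns full intensity, while one turned slightly away returns almost nothing, which is why specular reads shape.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def blinn_phong_spec(normal, light_dir, view_dir, shininess=50.0):
    """Blinn-Phong specular: intensity depends on the angle between the
    surface normal and the light/view half vector, so it varies with shape."""
    h = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    ndoth = max(0.0, sum(n * c for n, c in zip(normal, h)))
    return ndoth ** shininess

light = normalize((0.0, 0.0, 1.0))
view = normalize((0.0, 0.0, 1.0))
# A uniform ambient term would be identical for both of these normals,
# but the specular response distinguishes them immediately.
facing = blinn_phong_spec((0.0, 0.0, 1.0), light, view)
turned = blinn_phong_spec(normalize((0.6, 0.0, 0.8)), light, view)
```

On a nearly spherical face like Tintin’s, that steep angular falloff is what supplies the shading detail that shadows and surface angles supply on craggier characters.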

Because Tintin chases through several countries during the film, the lighters faced situations ranging from the desert in the middle of the day to overcast oceans, and all the lighting needed to interact in a consistent manner with the new hair shading models and the new subsurface scattering models for the skin.

All this attention to detail—the new muscle system for the characters’ faces and Snowy’s shoulders, capturing skin textures, new hair and fur systems, new shaders for hair and skin, the 1600 variations of Tintin that it took to produce a character that looked right, the research into reference materials and research into scientific methods, and more—combined to make a film that critics such as Variety’s Leslie Felperin praise: “The motion-capture performances have been achieved with such exactitude they look effortless, to the point where the characters, with their exaggerated features, almost resemble flesh-and-blood thesps wearing prosthetic makeup.”

The challenge for the water-simulation team was in creating photoreal water in a comic-book style. An updated Fast Fourier Transform library for the waves, Smoothed-Particle Hydrodynamics for cresting foam, Exotic Matter’s Naiad for hero interactions, and Weta’s own Synapse fluid-simulation software helped.

Asked how he was able to keep the characters in Tintin out of the notorious uncanny valley, Letteri’s answer is, “We didn’t try. We weren’t thinking about it. To tell you the truth, the question only came up when other people started asking about the movie. For us, these are just characters we like to watch. They either work or they don’t, and if they don’t work, you can call it whatever you want. When you’re working on a film, you’re focusing on the specifics. Is that eyelid doing the right thing? Is that lip doing the right thing?”

But certainly the studio’s experience with live-action films, with the rigors of matching the real world and often substituting virtual for real, had an effect. “In live-action films, when you have a visual element that isn’t real, it’s becoming easier to create the reality and what’s around it digitally,” Letteri says. “The whole shot becomes digital, and most people don’t know the difference—and that’s the interesting part. It doesn’t matter. So, it’s hard to define the lines these days. In a way, that’s what Jim [Cameron] was trying to do with Avatar. There should be no barrier moving between these different worlds.”

“But,” Letteri continues, “live-action visual effects ground you. You have a photographic plate. You judge everything by the pixels next to it. You know when it doesn’t work. And I think that was the hardest thing about [making an animated film]. If you’re going to try to make it look real, you need a touchstone for reality. In a world that’s completely digital, it becomes easy to convince yourself that something looks good because it looks better than the last time you saw it. But if you put it next to something real, it doesn’t [look as good]. So we couldn’t let ourselves be convinced. Because we come from visual effects, we strive for accuracy, to make everything believable. We photographed lots of reference. We constantly judged against something real. When we needed to know what Tintin’s hair looked like wet, we persuaded someone with red hair to cut it like Tintin’s and soak his head in a barrel of water.”

There you have it. If you want to stay out of the uncanny valley, soak a redhead in a barrel of water. And then hire the best artists and researchers you can find, ones who work meticulously for years to make the world on the movie screen seem real.

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.