Rocket science: Meet the CG supervisor behind the furry fan favorite

Rocket Raccoon is one of the Marvel Cinematic Universe’s most unlikely heroes. The genetically modified creature is an ill-tempered, poor-mannered and belligerent career criminal. But over the course of Guardians of the Galaxy, Guardians of the Galaxy Vol. 2, and Avengers: Infinity War, and his relationships with Groot, Peter Quill, and Thor, he’s matured into an altruistic fan favorite.

The character’s success is down to his believable characterization, hints at a traumatic past, Sean Gunn and Bradley Cooper’s performance capture — and the fact that he looks and moves just like a real talking raccoon.

One of the people behind Rocket is Nate Shaw, a CG supervisor at Method Studios, which handled the character for Guardians of the Galaxy Vol. 2, and Avengers: Infinity War. Nate’s role was to oversee Rocket’s fur pipeline and his fur shader in V-Ray.

We caught up with Nate after his inspiring talk at Total Chaos to ask him about working with the sweet rabbit.

How did you come to work on Rocket?

I actually did a groom of a CG dog on Divergent, which is a movie Method worked on years before Guardians of the Galaxy Vol. 2. It was a full CG shot, and I was working on that right when Guardians of the Galaxy came out. I remember looking at my dog, and comparing it to Framestore’s Rocket, and thinking: “That’s really inspiring. And dang, they’re good!”

I never thought that I would get that chance to then work on Rocket, and I learned a lot in the process. I can't take credit for it — a lot of people worked on it. But it was very exciting. I feel close to that character for sure. I spent so much time looking at him.

Do you have any Rocket Raccoon toys?

I don’t! I have a small apartment, so I try to avoid as many things as possible. But if I were to buy one, it would be Rocket.

Could you explain Method’s relationship with Framestore?

Framestore did the original asset for Guardians of the Galaxy, as well as the initial build for Guardians of the Galaxy Vol. 2, so they just sent us a lot of data. They sent us hair caches, which contained every single hair. Then they sent us guide hairs, although those weren't as useful as every single hair. They sent us texture maps and meshes for skin and suits, and they were really helpful. It was nice working with Framestore: we would get on calls with them, they were in London and we were in LA, and we had a really good working relationship.

It was so collaborative, even between studios, and they were very forthcoming with any questions we had.

We had still frames from Framestore. They gave us their look dev environment, which had all their geometry, the Maya file, and their poses for Rocket, so we would just put Rocket in there, render him with the same cameras, and go back and forth in Nuke until the results looked identical.

Rocket’s movements are created from facial capture of Bradley Cooper and motion reference of Sean Gunn. Do you have to do much hand animation on top of this data?

For Rocket's performance, everything was hand-keyed, so it took a lot of animation time. We had face shapes from Framestore, and then we made our own if we needed extra blend shapes.

What was the biggest challenge?

On Guardians of the Galaxy Vol. 2, we had never done a furry creature that was this much of a hero, this close-up. He needed to perform in such an emotive way, and be in so many shots. It was a volume of work that we had never done before, and it took us to the next level of challenges.

The animators work with Rocket as a mesh, and then they have a fur volume, which is a shell of Rocket's skin expanded to the dimensions of his hair. That's how they would animate. Then they would publish the animation: hit a button and export it to pass on to the next department, which would create an Alembic. Our pipeline bases a lot of things on Alembic, especially in departments downstream from animation. Even just the skin was an animated Alembic cache.
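That publish-button step can be sketched loosely. In Maya, the Alembic plugin's `AbcExport` command takes a single job-argument string; the node path, frame range, and file location below are invented placeholders, not Method's actual setup, and a real publish would add more flags:

```python
def abc_job(root, start, end, out_path):
    """Build the job-argument string Maya's AbcExport command expects.

    All values here are hypothetical; a production publish would also
    choose flags such as -uvWrite or -stripNamespaces.
    """
    return (f"-frameRange {start} {end} "
            f"-root {root} "
            f"-file {out_path}")

# Inside a Maya session, the publish button would then run something like:
#   import maya.cmds as cmds
#   cmds.AbcExport(j=abc_job("|rocket|body_mesh", 1001, 1120,
#                            "/shots/rocket_anim.abc"))
print(abc_job("|rocket|body_mesh", 1001, 1120, "/shots/rocket_anim.abc"))
```

Downstream departments would then read the cache as animated geometry, which matches the "even the skin was an animated Alembic cache" approach described above.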

Then we would do that hair bind, and that would create the hairs. On both movies we had this post process to give the lighters this extra cache. On Guardians of the Galaxy Vol. 2, it was Alembics out of Houdini that went through a proxy. On the second movie, Avengers: Infinity War, we would run it and do the bind in XGen, and that produced V-Ray scenes that were rendered and brought in. Whether we were grooming in Houdini or XGen, the hair was never going into Maya; we skipped it and went straight to V-Ray.

Any grooming tips for aspiring CG hairdressers?

If you want to be a hair groomer, just look at real things. I develop a fixation on real-world objects, whether it's buildings for San Andreas, dogs for Divergent, or raccoons for Rocket. Dig in deep and look at reference as you work, and you'll see things that you didn't in the beginning. Eventually, you get to something that is photoreal.

Did you use any specific V-Ray features here?

The great thing about V-Ray for Maya was the ability to use proxies. We weren't working with a fur procedural; we were working with every single hair. We had bound the hair in Houdini, and then we were caching it back out because we didn't want to re-groom the character. As we were working with every single strand of hair, it would crash Maya as soon as we tried to load up just one section of his hair. Because it never really needed to be in Maya, we could bypass that by using proxies.
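A rough back-of-envelope calculation, with assumed counts, shows why loading every strand into Maya's scene is untenable while a render-time proxy is not. None of these numbers are from the production; they are illustrative:

```python
# Back-of-envelope for per-strand hair data. All counts are assumptions
# for illustration, not production figures from Rocket's groom.
strands = 10_000_000        # hairs on a hypothetical hero groom
cvs_per_strand = 16         # control points per hair curve
floats_per_cv = 3           # x, y, z position
bytes_per_float = 4

raw_bytes = strands * cvs_per_strand * floats_per_cv * bytes_per_float
print(f"{raw_bytes / 2**30:.1f} GiB of raw CV data per frame")
```

Even before Maya adds its own per-node overhead on top of the raw floats, that is gigabytes of data per frame. A V-Ray proxy instead keeps the geometry on disk and streams it at render time, which is the bypass described above.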

What kind of changes did you make between Guardians of the Galaxy Vol. 2 and Avengers: Infinity War?

Just minor changes. The asset was looking pretty good after Guardians of the Galaxy Vol. 2, but we changed some spec maps and spec responses on his hair to add some glints. We added a percentage of hairs that got more or less spec. We also increased the subsurface here and there on the nose, and improved displacements.
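The per-hair spec variation mentioned above can be sketched in plain Python. The fraction and multiplier values are invented for illustration; in production, something along these lines would drive the hair shader's specular amount per strand:

```python
import random

def spec_multipliers(num_hairs, fraction=0.2, boost=1.5, cut=0.7, seed=7):
    """Give a random subset of hairs a stronger or weaker spec response.

    The fraction, boost, and cut values are hypothetical; the idea is
    just that some percentage of hairs deviate from the base specular,
    producing glints and dull patches across the groom.
    """
    rng = random.Random(seed)
    mults = []
    for _ in range(num_hairs):
        r = rng.random()
        if r < fraction / 2:
            mults.append(boost)   # glinty hairs
        elif r < fraction:
            mults.append(cut)     # duller hairs
        else:
            mults.append(1.0)     # base specular, unchanged
    return mults

m = spec_multipliers(1000)
print(sum(1 for x in m if x != 1.0) / len(m))  # roughly the chosen fraction
```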

What did you think of Total Chaos?

Total Chaos was great. It was the first time I'd been to Bulgaria, and I would go back. Everyone was really friendly, and there were tons of great speakers.

Any particular highlights that you've seen?

One highlight was listening to a talk on machine learning and how it can be used for render farm predictions, which is super techie and nerdy. It's not as flashy as Rocket Raccoon, but as a CG supervisor you're concerned with iteration time, getting information to bid on work, and knowing how long it's going to take to render certain jobs. If we can use smarter technologies, rather than just doing the math, then I'm all for it.