The secrets of PlayStation's mind-blowing ad campaign

The Mill’s Joji Tsuruga reveals all about the studio’s Greatness Awaits advert for PlayStation.


From the moment we heard about this job, we knew it was going to be epic. The world of PlayStation is not new to The Mill, as we have done several of their broadcast commercials in the past, including Michael from the Long Live Play campaign. Following the huge success of that, Greatness Awaits was set to launch during E3 2013 and we knew we would have to raise the bar even higher.

To tackle this spot, we collaborated with director Rupert Sanders and agency BBH NY. The Mill team was led by 2D lead Iwan Zwarts and 3D leads Rob Petrie and Joji Tsuruga. The massive core team of artists included Wyatt Savarese, who was not only a team leader but also acted as our in-house gaming expert and consultant.

Things we did right

01. Preparing for greatness

We had four weeks of preparation time leading up to the live-action shoot. During this phase we created a layout of the scene, animated a pre-viz for timings, began R&D for FX, and prepared most of the CG assets.

With more than 20 games referenced in this commercial, we spent a lot of time gathering elements from all of the developers. Not only does every game company have their own software package of choice, but differences in naming conventions and the use of custom tools meant each element had a unique structure.

Rigging was especially time-consuming since we wanted every rig to function as it should without the use of custom plug-ins. Once it was all consolidated to work in Maya, everything related to shaders and lighting was converted so it could be rendered in Arnold.

"Although this was a CG-heavy job, we shot as much as possible practically," says Joji

02. Asset management

The Mill uses a proprietary asset-driven pipeline tool, written and supported by James Studdart and our research and development team. In this system, an asset is any production data that can be published, shared, and versioned. This can be anything from models, animation and shaders to entire scenes. Each asset also keeps track of extra information such as the author, users, and time of creation, which helps with organisation.
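The Mill's tool itself is proprietary, but the core idea it describes, an immutable published asset that tracks its author, version, and creation time, can be sketched in a few lines of Python. The names below are illustrative only, not The Mill's actual API:

```python
import os
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class AssetVersion:
    """One published, immutable version of a production asset."""
    name: str          # e.g. "pirateShip_model" (hypothetical asset name)
    kind: str          # "model", "animation", "shader", "scene", ...
    version: int
    author: str
    created: datetime


class AssetStore:
    """Minimal publish/fetch store keyed by asset name."""

    def __init__(self):
        self._versions = {}  # name -> list[AssetVersion]

    def publish(self, name, kind):
        """Publish a new version; version numbers grow monotonically."""
        history = self._versions.setdefault(name, [])
        record = AssetVersion(
            name, kind, len(history) + 1,
            os.environ.get("USER", "artist"),
            datetime.now(timezone.utc),
        )
        history.append(record)
        return record

    def latest(self, name):
        """Fetch the most recently published version of an asset."""
        return self._versions[name][-1]


store = AssetStore()
store.publish("pirateShip_model", "model")
v2 = store.publish("pirateShip_model", "model")
assert store.latest("pirateShip_model").version == 2
```

Because each published version is frozen and the store only ever appends, two artists can publish against the same asset name without stepping on each other's data, which is the property that makes the parallel workflow described above possible.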

One of our pipeline’s biggest strengths is that it is built for parallel workflow. With the volume of assets on this job, it was necessary to have several artists work on a single element simultaneously. This can often prove to be quite difficult to coordinate. However, we have the ability to take each element and have it go through rigging, look development, and animation all at the same time. Troubleshooting is also easier since more eyes are on the same assets and problems are found early.

03. Light detection and ranging

We were first introduced to LIDAR at a Pre-AICP creative showcase that was being held at The Mill in New York, and we were thoroughly impressed. We knew this job would be an opportunity to make great use of this technology and we contacted Travis Reinke at SCANable for their services.

LIDAR, or Light Detection and Ranging, uses lasers to scan and survey objects or environments. By analysing the reflected light, it can take precise measurements between surfaces. The combined data gathered from the scans results in a point cloud that accurately maps the subject both in 3D and in colour.

Using multiple LIDAR scans, the team could even get data around the corner of a street

At first, the raw data was so heavy that most of our software packages could not handle it without crashing. A Python script was written to reduce the point count by a factor of 10, 100, or 1,000; it also allowed us to output the data into different formats to be used with any package.
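A decimation script of that kind boils down to keeping every Nth point and re-serialising the result. This is a hedged sketch of the approach, not The Mill's actual tool; the ASCII `x y z` format used here is an assumption for illustration:

```python
def decimate(points, factor):
    """Keep every `factor`-th point, e.g. factor=10, 100 or 1000."""
    return points[::factor]


def load_xyz(path):
    """Parse an ASCII point cloud: one whitespace-separated line per point."""
    with open(path) as f:
        return [tuple(float(v) for v in line.split())
                for line in f if line.strip()]


def save_xyz(points, path):
    """Write points back out, one line per point."""
    with open(path, "w") as f:
        for point in points:
            f.write(" ".join(f"{v:.6f}" for v in point) + "\n")
```

Keeping the load, reduce, and save steps separate is what makes it easy to bolt on extra output formats for whichever package needs the data.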

In Maya, we were able to load the data into an nParticle system. This was extremely helpful - not only for the track, but also to create a very accurate layout of the set. We were able to see the placement and orientation of every object from the shoot, even down to the palm tree leaves. From there, we put the point cloud through further reductions and selectively exported nulls to be used in Nuke for compositing.
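Selectively exporting nulls for compositing amounts to picking a sparse subset of the reduced cloud and writing each point as a 3D node Nuke can read. A rough sketch of the idea in Python, where the exact `.nk` snippet layout and the `lidar_null` naming are illustrative assumptions:

```python
def export_nuke_axes(points, step):
    """Write every `step`-th point as a Nuke Axis node in .nk script text."""
    nodes = []
    for i, (x, y, z) in enumerate(points[::step]):
        nodes.append(
            "Axis2 {\n"
            f" translate {{{x} {y} {z}}}\n"
            f" name lidar_null_{i}\n"   # hypothetical naming convention
            "}"
        )
    return "\n".join(nodes)


# Example: keep one null per 100 points of a reduced cloud.
cloud = [(float(i), float(i) * 0.5, 0.0) for i in range(500)]
snippet = export_nuke_axes(cloud, 100)
```

The resulting text can be pasted into a Nuke script, giving the compositors locators that line up with the surveyed set.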

04. Proxies

This job would have been impossible to finish on time without the use of proxies. Working with dense geometry, animated caches, and large particle systems will slow down even our fastest workstations, but proxies make that data significantly easier to manage and greatly improve efficiency.

In Maya, we have a powerful proprietary proxy system, created by Craig Davies of the research and development team. These proxies can be viewed in varying levels of detail: bounding boxes, point clouds, and full geometry. The system also has a robust interface where you can easily adjust shader assignments.

The heavy data is loaded during render time but stays as light as a locator within the scene file. Since there were hundreds of assets in our scene files, this system was essential.
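The pattern described here, a lightweight stand-in in the scene file with the heavy geometry resolved lazily at render time, can be sketched generically. This is illustrative Python under assumed names, not The Mill's system:

```python
class GeometryProxy:
    """Lightweight scene stand-in; the heavy mesh only loads at render time."""

    LEVELS = ("bbox", "pointcloud", "full")  # viewport levels of detail

    def __init__(self, path, bbox):
        self.path = path        # heavy geometry on disk (hypothetical path)
        self.bbox = bbox        # ((xmin, ymin, zmin), (xmax, ymax, zmax))
        self.display = "bbox"   # default viewport representation
        self._mesh = None       # nothing heavy in memory yet

    def set_display(self, level):
        """Switch the viewport representation without touching disk."""
        if level not in self.LEVELS:
            raise ValueError(f"unknown display level: {level}")
        self.display = level

    def resolve(self, loader):
        """Called at render time; loads and caches the real geometry once."""
        if self._mesh is None:
            self._mesh = loader(self.path)
        return self._mesh


# In the scene, the proxy costs about as much as a locator...
proxy = GeometryProxy("/assets/pirateShip.abc", ((0, 0, 0), (10, 4, 3)))
proxy.set_display("pointcloud")
assert proxy._mesh is None

# ...and only the renderer pays for the full data.
mesh = proxy.resolve(lambda p: f"mesh<{p}>")
```

With hundreds of such stand-ins in a scene, the scene file stays light while the render farm does the heavy lifting, which is the trade-off the passage above describes.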

Left: Original model of the pirate ship with dense geometry. Centre: Proxy model viewed as a point cloud. Right: Proxy model viewed as bounding boxes

Lessons learned

Considering the scale of this job, we are very fortunate to say that nothing went seriously wrong. However, one big factor brought on a slew of difficulties: having a single camera move with no cuts.

One of the major challenges was the shoot itself; to choreograph more than 80 extras all at once while doing one 30-second camera move was a huge challenge for production. Out of a 12-hour shoot day, we managed to get seven takes, which seems like very little - but the reset time between takes was substantial.

To choreograph more than 80 extras all at once while doing one 30-second camera move was a huge challenge

The start of the shot was probably the hardest as we had to stitch two plates together in post, which meant the actor and camera height, tilt and speed all had to work in relation to what was shot for the street scene a few days before. Because of this, it took the actor a few hours to match his exact moves from the previous shoot.

Having a single camera move with no cuts presented The Mill team with a number of challenges

It was also an arduous task to distribute the workload on a single shot. During the pre-viz phase, we planned to break the scene down into distance-based sections: foreground, mid-ground and background. However, once we were in production, there were constant adjustments in layout that made it difficult to separate solely based on depth. We realised we’d need a different approach, and the animation and lighting scenes would each need a different setup.

Animation was separated based on clusters of interaction surrounding a few key characters. Since much of the action stayed on screen for a long time, we also needed to consider idle animations or non-key actions. For the lighting scenes, there were so many light and shadow interactions between sections that we decided to keep it all in one scene. Although the use of proxies made this possible, any change in animation resulted in re-rendering long sequences.

Animation was separated based on clusters of interaction surrounding a few key characters

Greatness Awaits was truly a dream job for a gamer. We got to work with several assets from games we love to play, and it also gave us an insight into how CG assets are handled in the gaming world.

Planning ahead and giving a proper amount of time for preparation undoubtedly made a huge difference on this job. We were prepared for everything on the shoot, and were ready to start on CG from the moment we were given plates. The use of custom tools and modern technology allowed us to work extremely efficiently. It was an ambitious project, but we had an incredibly solid team that kept the process smooth straight to the end.