The Mill recently collaborated with Brothers & Sisters and award-winning director Johnny Green of Biscuit to bring the explosive and visually epic new Sky ‘Q’ spot to life.

The arresting launch film in the campaign saw a large team of Mill VFX artists work for over twelve months, developing bespoke tools for the project that enabled them to create the ‘fluid viewing’ liquid droplets seen exploding out of the screen and embarking on an epic journey around the home.

We caught up with Dan Williams, Creative Director and Lead 2D Artist, and Francois Roisin, Lead 3D Artist and Joint Creative Director on Sky Q, to learn just how they set about creating the liquid hero of this highly complex project.

Pre-vis and R&D were key to this project - why is this?

Dan: The R&D phase of the job was very important, as it gave us the opportunity to explore different looks and behaviours for the liquid. It was highly subjective, as no one had seen liquefied TVs before. How transparent are they? How viscous? Do they glow? Are they electronic or more like glass? So we looked at many different visual approaches to find the right balance of aesthetic qualities.

We started with traditional 2D concept drawings, but soon felt that we needed to begin simulations and render tests to get a real sense of how the liquid would move and react.

The pre-vis was an extremely useful tool. We worked very closely with Johnny, Aaron at Brothers & Sisters and Lasse, the DOP, to construct a pre-vis that gave us the bare bones of the shoot in terms of framing and timing. We used it as a jumping-off point for each camera setup, and it gave us something much more tangible than a storyboard to discuss each shot on set.

Francois: The pre-vis, being an extension of the storyboard and a tool for everybody involved in organising a shoot (director, creatives, DOP, camera crew), meant we had to provide rough ideas about actions and framing. The pre-vis was key to this project because it is fundamentally hard to organise a shoot around a hero action that does not actually exist. A classic post-heavy job would require far more detailed and precise choreography, but we wanted to keep exploring and discovering as we went along.

Even though pre-vis is a great way to craft scenes and action before placing a camera on set, we did not want to be tied to a strict choreography; we wanted to be able to find beautiful moments and happy accidents on location.

What are the key steps in creating CG liquid? Is there any integral software?

Dan: We shot lots of live action reference of droplets and made an edit of our favourite “real” moments. We then blocked out the action using the real liquid in Flame, and also using rudimentary 3D spheres in Maya.

Once we had a fairly good idea of the blocking/choreography of the liquid, we went into simulation in Houdini. The sim process would often take a lot of iterations before we reached something we were happy with, as it had to tick so many boxes. We were always looking for drama and natural moments in the simulation. The whole scene was always modelled and tracked to the live action backgrounds so the liquid would interact with surfaces correctly.

We would then project TV footage and other pixellated and moiré textures onto the inner skin of the droplets to give them their own unique looks, whilst the outer skin had magnifying and reflective qualities.

Along with a lot of other bespoke passes, such as hit maps that allowed us to give the droplets a kinetic quality and made them glow on impact, the renders were then passed to Flame and Nuke to be composited into their environments.
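The hit-map idea above can be sketched in a few lines. This is a minimal illustration, not The Mill's actual pipeline: it assumes a hypothetical render layout where the hit map is a single-channel pass that is bright wherever the droplet contacts a surface, and uses it to drive an additive glow in comp.

```python
import numpy as np

def glow_on_impact(beauty, hit_map, glow_color=(0.4, 0.7, 1.0), intensity=1.5):
    """Additively composite a glow driven by a 'hit map' pass.

    beauty:  HxWx3 float array, the rendered droplet (values in [0, 1]).
    hit_map: HxW float array in [0, 1], bright where the droplet hits a
             surface (hypothetical pass layout; real AOVs will differ).
    """
    # Tint and scale the hit map, then add it over the beauty render.
    glow = hit_map[..., None] * np.asarray(glow_color) * intensity
    return np.clip(beauty + glow, 0.0, 1.0)
```

In a real Flame or Nuke comp the same idea would be a merge/plus node keyed by the hit-map channel, with the tint and intensity graded per shot.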

Francois: The first step was to rebuild all of our sets shot in camera in order to catch the light emissions, shadows and caustic effects from the droplets, and sometimes to re-animate elements of the set that would interact with our hero fluids.

A more conventional VFX integration job would consist of making our CG sit naturally in the environment as shot, but in this case we had to alter our set and live action to be affected by our fluids.

Before firing any CG liquid into our virtual sets (rebuilt from live action), we combined the plates of the sets with plates of actual liquids, shot at high speed and in macro, to assemble an animatic made of real elements. This way we were able to refine our intentions in terms of framing, action, mass and set interaction, and to give the simulation team inspiration for every shot.

Crafting our CG fluids required the ability to control and manipulate simulations with flexibility, which is why we chose Houdini. The software provides solid fluid simulation but also allows us to go under the hood and customise every step of the hydrodynamics system.

Talk us through the 2D - what was the importance of seamlessly integrating each CG droplet into the final film?

Dan: Giving the liquid a photographic reality was vital to the success of the films. We spent a long time working on just a few shots (a couple of close-ups, a mid and a wide), and once we’d finally nailed it, we were able to take the look and learning from those shots and use them as benchmarks to match for all the others. Consistency was our watchword, but each shot had its own unique challenges. It took a lot of crafting to get the seamless spots that you see now.

Throughout the spots we were always looking for interaction, whether it was light interaction when a droplet contacted a surface, reflection on the objects around it or shadow play. All of these were dialled in and balanced in Flame and Nuke.

On set, we threw various different coloured bouncy balls through each scene and these proved to be very good references for matching depth of field, strengths of shadow and the tonality of the droplets themselves.

We also shot lock-offs of soft and hard light moving through the scenes, so we could then project lit-up areas to animate on when our droplets glowed.

Because of the macro nature of a lot of the shots, depth of field played a big part in bringing a correct reality to the droplets. As we had built and tracked the environments in 3D, we could use generated depth mattes to play with the focus of the overall shot, guiding the eye to the moments of beauty within each sequence.
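The depth-matte trick can be illustrated with a toy blend. This sketch is not the production setup (a real comp would use a proper defocus node with variable blur kernels); it just shows how a rendered depth pass lets you choose the focal plane after the fact, assuming a hypothetical normalised depth matte and a pre-blurred copy of the plate.

```python
import numpy as np

def rack_focus(sharp, blurred, depth, focus_depth, falloff=0.2):
    """Blend a pre-blurred plate over the sharp one using a depth matte.

    depth: HxW float array of normalised scene depth (0 = near, 1 = far).
    Pixels at focus_depth stay sharp; the mix ramps to fully blurred over
    `falloff` units of depth. A simplified stand-in for a depth-of-field
    node driven by a CG depth pass.
    """
    mix = np.clip(np.abs(depth - focus_depth) / falloff, 0.0, 1.0)[..., None]
    return sharp * (1.0 - mix) + blurred * mix
```

Because the environments were tracked in 3D, the depth matte is rendered rather than rotoscoped, so the focal plane can be animated freely to guide the eye.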

What new ways of working / bespoke software did The Mill create in order to make the CG liquid?

Francois: The simulation system was a keystone of this project, and it took us a fair amount of time to design. The simulation had to answer two questions: how do we control and bring life to fluids to craft an elegant choreography, and how can we provide the rendering team with the elements necessary to design a liquid TV screen?

We started by mimicking the real fluids we had shot in slow motion and macro on water-repellent surfaces. Real fluids tended to lose momentum pretty quickly, so we decided to add different temperatures within our CG fluids so they would attract and repel each other, only by a fraction; by doing this, the liquid seems to keep constant momentum while retaining a natural motion.
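The temperature idea can be sketched as a toy particle system. This is only an analogy of the trick described above, not the Houdini setup itself: per-particle "temperatures" of opposite sign attract and like signs repel, with a strength small enough to nudge the motion rather than dominate it.

```python
import numpy as np

def temperature_forces(pos, temp, strength=0.01):
    """Tiny pairwise attract/repel forces from per-particle 'temperature'.

    pos:  Nx2 float array of particle positions.
    temp: N float array of signed temperatures. Opposite-signed pairs
          attract, like-signed pairs repel (a toy stand-in for the
          custom forces layered on top of the fluid solve).
    """
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d) + 1e-6  # avoid division by zero
            # Opposite temps -> negative product -> positive sign -> attraction.
            sign = -temp[i] * temp[j]
            forces[i] += strength * sign * d / (dist ** 3)
    return forces
```

Keeping `strength` a fraction of the main solve is what preserves the natural motion: the forces only stop the sim from settling too quickly.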

Once we were happy with the general behaviour of our fluid system, we started actual simulations in scenes, treating it as a real fluid: do we want to use a pipette, syringe, cup, or even a bucket to throw our fluids into the set? How far? How fast? From where?

This way the workflow for artists remained comprehensible and did not drown them in tons of different settings to achieve a good result, or at least a good initial result! We still had to customise a lot of parameters per scene to achieve the desired effect (viscosity, gravity, field turbulence...). By keeping this approach fairly realistic, we still expected and counted on happy accidents; while shooting real fluids we were so often surprised by an unexpected behaviour of the liquid, and we wanted that for our CG.

In terms of look, the way the fluid was built had a great influence too. We developed a technique to project the screen content onto moving fluid. Liquid is by nature in constant movement, and therefore it is really hard to keep a good read of an image (just like when you mix two different colours of paint, they eventually end up blending with each other). So we set up "dynamic" UVs, where the render artist could ask the simulation artist to project their texture at a keyframe, or several keyframes, and from there the simulation would carry over and blend the projections together to give the illusion of a consistent image.
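The cross-fading of keyframed projections can be sketched as a weighting scheme. This is a hypothetical simplification: each `sample` value stands in for a full texture lookup through a UV set frozen at one keyframe and advected by the sim, and the weights simply fall off as time moves away from each key.

```python
def blend_projections(t, keys, width=10.0):
    """Cross-fade samples from UV sets projected at several keyframes.

    keys: list of (key_time, sample) pairs, where `sample` stands in for
    a texture lookup through UVs frozen at key_time and carried along by
    the simulation (hypothetical stand-in for the 'dynamic UV' setup).
    Each projection's weight falls off linearly over `width` frames, so
    neighbouring projections dissolve into one another.
    """
    weights = [max(0.0, 1.0 - abs(t - kt) / width) for kt, _ in keys]
    total = sum(weights)
    if total == 0.0:
        return keys[0][1]  # far from every key: fall back to first projection
    return sum(w * s for w, (_, s) in zip(weights, keys)) / total
```

Midway between two keyframes the result is an even mix of both projections, which is what hides the moment one projection is swapped for the next.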

We also broke the fluid down into two different membranes: the inner membrane carried the TV screen content (content + pixels + moiré effect), while the outer membrane magnified the image and created a chromatic aberration on the pixels/moiré effect.
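The chromatic aberration on the outer membrane can be faked in 2D by sliding the red and blue channels apart. This is a crude stand-in for the refractive shader on the outer membrane, shown only to illustrate the effect; the offsets and direction are arbitrary.

```python
import numpy as np

def chromatic_aberration(img, shift=1):
    """Offset red and blue channels in opposite directions to fake the
    colour fringing the outer membrane adds over the pixel/moire detail.

    img: HxWx3 float array. Green stays put; red shifts right and blue
    shifts left by `shift` pixels (a simple 2D approximation).
    """
    out = img.copy()
    out[..., 0] = np.roll(img[..., 0], shift, axis=1)   # red channel right
    out[..., 2] = np.roll(img[..., 2], -shift, axis=1)  # blue channel left
    return out
```

On a rendered droplet this splits every bright pixel of the inner-membrane image into a red/blue fringe, reading as refraction through the outer skin.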