Making Of: Super League

In November 2010 Xenon Studios, newly formed by Michael Powell after the closure of Ark VFX, was contacted by director Adam Wells to help pitch for the title sequence of Sky Sports’ Super League coverage.

The project was a perfect opportunity for the ex-Ark team to collaborate, and in this in-depth breakdown we’ll take an overall look at some of the work involved.

Super League is the top Rugby League competition in Europe and the players have a reputation for skill and toughness, hence the nickname “Men Of Steel”.

The brief was to get the Men Of Steel concept across in a 30-second action sequence of players battling it out in a science-fiction-inspired setting. Authenticity also had to be a key part of the sequence: nobody wanted the piece to be just a montage of shots. We wanted to produce a killer sequence that really tells the story of how skilled and sometimes brutal the game can be.

As well as being a highly experienced director, Adam is a huge rugby league fan, and he worked closely with storyboard artists to come up with ideas for passes and moves that would then be motion captured and later constructed into sequences.

The concept was to portray the players as real life “Men Of Steel” wearing Sci-Fi full body armour suits. The team gathered to discuss Adam’s vision which was to be a fusion of Sci-Fi production design and real world rugby league tactics and plays. Concept artist Stephen Tappin began work on the suit and stadium designs.

Steve’s initial suit sketches.

Work in progress renders of the stadium. Designed and modelled by Stephen Tappin, shading and rendering by Richard Wright.

After a few collaborative discussions the team had a very good idea of what was involved. We quickly realised that the design of the suit would be critical to the success of the final piece. It obviously had to look very cool; on the other hand, it couldn’t be so complex that it would become too cumbersome to build, rig and animate in the allotted 12-week timeframe.

Based on ideas from Adam and Steve’s initial suit design sketches, a “proof of concept” test piece was quickly worked up in Lightwave by modellers Tim Brown and Craig Clark, then passed on to animation director Andy Turner to set up, animate and render. This initial head and shoulders test was completed in a few days with simple animation. It was intended partly as a target for the production design and partly as part of the final pitch to Sky Sports, who had never before commissioned a completely CGI title sequence for a sport like rugby league.

[youtube id="Lk76gJAHjTQ" width="600" height="350"]

This first proof of concept test piece, modelled by Craig Clark (helmet) and Tim Brown (torso), was set up and rendered in Lightwave by Andy Turner. Note how only the upper body was modelled and animated. This first test served as a basis for the design of the suit, the shading design, and the lighting.

Meanwhile Steve worked with Adam to produce a new, more focused set of boards that could be used as the basis for a rough animatic, and for Andy to begin working out the flow and pace of the sequences. Tim began the first draft of the full suit model and worked closely with technical director Richard Bentley. Building the suit would take many hours of tweaks and alterations by both Tim and Rich so it could move in a way that let us apply a varied and dynamic set of performances while still closely matching the original concepts.

[youtube id="b2wFaflQR74" width="600" height="350"]

Stephen Tappin’s storyboards were edited to form the first-pass animatic.

Once we were happy with the boards and the progress of the suit design, the motion capture session could begin.

Adam directed the day-long session at Pinewood-based Centroid with four players from the Sussex Merlins Rugby League Club. The data was then processed and sent back to Rich Bentley for integration with his ongoing rigging process.

The rigs were being setup in a way that allowed us to continually update both the model and rig itself so that Andy could work constructing scenes from the many mocap takes of the players while Tim and Rich Bentley could add updated models and rigs to the pipeline.

Centroid’s “HedgeTrimmer” software was fantastic. Alongside the list of available takes, each take has two video streams and a realtime display of the block-out 3D model that can be reviewed offline back in the studio. This not only made it easy to select takes for processing but also saved valuable layout time, allowing Andy to import only the takes he required when he needed them.

Centroid’s excellent HedgeTrimmer software.

Andy used the board edit as a very rough guide and began designing the sequences Adam required, based on Steve’s boards and the available motion-capture data.

From the start we’d all agreed we wanted a punchy style to the piece, for a couple of reasons. Aesthetically we wanted to keep the cameras moving as much as possible, adding energy and excitement. We also knew from experience how difficult the design of the suit would make some of the extreme character movements that were needed, and added camera movement would help make any potential problems less visible.

The problem was that the suit would never give the same range of movement the players had in the mocap session. The bulkiness of the CGI suit would inevitably mean seriously restricting movement at the joints so they didn’t intersect with each other, and that wasn’t an option. We worked out various technical ways of reducing the problem, including a cleverly rigged set of shoulders and spine, and a rig with IK blending capabilities.

We were also very mindful from the beginning of placing cameras in the best positions for the action, and of not showing “too much” that might require lots of extra clean-up work for no real benefit to the viewer. As it was, the project was a significant task for a small team on a relatively small budget and a tight deadline.

Suit Modelling

The first thing Tim did when he started working on the suit was to create a rough silhouette based on the initial concept so that the form and proportions could be nailed early on. As nice as the concept was, it was a bit too “superhero” in its overall appearance, and that needed to be toned down a bit to make it appear more realistic. After all, this was not meant to be a robot or bionic suit; there were supposed to be real guys in there! A bulked-up human model was used to represent a rugby player in a background layer, then the quick silhouette was built over this guide. By doing this, we could be sure that key areas of the body and the limb proportions would be fairly close to a large human male, which would help make things look as realistic as possible. An added bonus is that mocap data works best when applied to CGI characters of a similar size, build and proportions.

Once the silhouette was done, Tim used the shapes to create a rough block-out version of the suit. This would act as a template for the forms and sizes of the hi-res suit parts. Before any hi-res parts could be built, though, the rough block-out was passed over to Richard Bentley so that he could construct a quick rig, test the suit for general mobility, and highlight any interpenetration issues.

Modelling guides were used to block out the rough model.

It is beneficial to do a mobility test at this stage of a project, so that time isn’t wasted further down the line creating lovingly modelled hi-res parts which don’t actually mesh together properly. Of course, being only a rough low-poly block-out, it is very easy to pull points around, and tweak any sections that cut through each other badly, or simply don’t work/look so well together. At this point in time, there were a fair few alterations, while Rich did various tests using some generic mocap data we had, and Tim adjusted the rough block-out model accordingly. With the time and budget constraints that we had, it was never feasible for us to create the ideal suit where all the joints worked perfectly together when performing some of the extreme movements that we were expecting from the mocap shoot. However we were committed to delivering the best we possibly could, and had one or two tricks up our sleeves for any joint problems that we couldn’t deal with in the model/rig.

[youtube id="j5zAARv-VVs" width="600" height="350"]

Range of movement tests were carried out by Richard Bentley. These were important to help highlight any areas of intersection.
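The kind of check these mobility tests perform can be sketched in plain Python. This is purely illustrative, not the production tooling (the real tests were run on the rig in Maya, and the part names, positions and radii below are invented): each rigid suit part is approximated by a bounding sphere, and any overlapping pairs are flagged for the modeller to fix.

```python
import math

def spheres_overlap(center_a, radius_a, center_b, radius_b, tolerance=0.0):
    """Return True if two bounding spheres interpenetrate."""
    dist = math.dist(center_a, center_b)
    return dist < (radius_a + radius_b - tolerance)

def find_interpenetrations(parts, tolerance=0.01):
    """parts: dict of name -> (center_xyz, radius). Returns offending pairs."""
    names = sorted(parts)
    hits = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            (ca, ra), (cb, rb) = parts[a], parts[b]
            if spheres_overlap(ca, ra, cb, rb, tolerance):
                hits.append((a, b))
    return hits

# Hypothetical pose sampled from one frame of a test animation
pose = {
    "upper_arm_l": ((0.30, 1.40, 0.00), 0.12),
    "forearm_l":   ((0.55, 1.25, 0.05), 0.10),
    "torso":       ((0.00, 1.30, 0.00), 0.25),
}
print(find_interpenetrations(pose))  # → [('torso', 'upper_arm_l')]
```

Run over every frame of a mocap take, a check like this points straight at the joints that need remodelling or restricted movement.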

[youtube id="lxpZuMzjH3Y" width="600" height="350"]

Early multiple referenced block-out rig tests. These were done to ensure we could have the required number of fully mocapped characters referenced into the scenes without problems occurring in the render pipeline.

Around about this time, the client asked for a few slight revisions to the concept. They felt that the initial one was too “cluttered” and had too many small details, whereas they wanted a cleaner, more streamlined look in certain areas. So Steve revisited the concept, and once the client had agreed to the changes, Tim updated the block-out with the alterations. The changes weren’t drastic, but some key areas such as the back, shoulders, elbows and waist had changed, meaning it was sensible to run the new block-out past Rich again to make sure the rig still worked satisfactorily with the revised shapes.

Left: Steve’s original suit design. Right: A more streamlined look.

Model progression.

Once the block-out had been approved, Tim set about modelling the hi-res version of the suit parts. Although we knew that the title sequence was going to be fast moving and action packed with no tight close-ups of the suit, we were aware that Sky were planning to feature some close-up shots in the promo trailer, which was going to be broadcast in HD. This meant that all areas of the suit had to be modelled to hold up under close scrutiny. Tim decided to tackle each section of the suit independently, starting with the torso, then the shoulders and arms, working down through the pelvis and legs, and finishing with the feet, hands and the all-important helmet. For each section, the low-poly block-out version was loaded up, and Tim would model the hi-res equivalent based around its shape, sticking to the joint positions that had been agreed at the block-out/rig stage.

First of all, the shape and form of the given part would be finessed, ensuring that all edges had the same size bevels and plates were of equal thickness, so that the suit as a whole would look consistent throughout. Once the form of a section was done, any “panel lines” or small details could be added. If there were features such as bolt heads to add, Tim would make sure these were real-world scale, all to aid the ultimate goal of making it look as realistic as possible. Where each section met its neighbour, Tim carefully ensured that a suitable joint was designed and modelled which would look feasible when animated and rendered. The entire suit model was built in Lightwave Modeller, using polygonal modelling methods, with a view to rendering with sub-Ds at render time.

[youtube id="TwgViE1r4R4" width="600" height="350"]

It was decided fairly early on that the helmet featured in the concept was far too detailed, and a much cleaner look was required. Adam preferred the look of the helmet that had been quickly made for the pitch. Unfortunately we couldn’t simply use that helmet, as the “shutters” on the front of the visor needed to be able to open and close, and the design of that helmet wouldn’t have allowed it. The upshot was that Tim had to design and build a helmet that not only looked aesthetically pleasing, but also had a visor system that appeared to work, and was the right size and shape to have a rugby player’s head positioned inside it.

[youtube id="_aRsMboYjPI" width="600" height="350"]

Helmet with “shutter visor” test.

When all the “Hard Parts” of the suit were complete, some time was spent creating the flexible sections of the suit. This included a “body-sock” to give the suit a sense of completeness, plus all the exposed joint areas: waist, neck, ankles, inside elbow joints and behind the knee.

[youtube id="J50TPtEZ72g" width="600" height="350"]

Ambient occlusion “Spinny” of the completed model.

[youtube id="WTQaIKM0goo" width="600" height="350"]

Rig and Shading Test.

Richard Wright, the artist responsible for lighting and rendering, had begun some initial testing of the rendering pipeline. Originally we had thought we’d use Lightwave for rendering; we’d had a lot of experience using Maya for animation and exporting to Lightwave for rendering via Beaver Project. We’d done a couple of very early tests of this pipeline, but it soon became clear that there could be significant memory and performance issues associated with moving that much data around. Since rigging and animation were to be done in Maya, it made more sense to use Renderman for Maya as the rendering solution.

Lightwave’s renderer is a very fast, fully raytracing solution and would have been well suited to the reflective nature of the scenes, as would Worley’s FPrime, but FPrime suffers from a lack of decent renderfarm support and the LW renderer can’t match the anti-aliasing that RMfM offers.

[youtube id="wSk1wtPwA2Q" width="600" height="350"]

Initial Lightwave pipeline tests using very early stand-in models.

Animation

Once the scenes had been set up with all the players’ mocap on the rigs, Adam spent time with Andy going through the various iterations of the edit, making sure that all the moves and sequences made sense from a storytelling point of view while also being authentic to the game. This edit still only had the main “hero” players in place, with the raw, unedited capture on the rigs.

[youtube id="ffePGNPWIiA" width="600" height="350"]

Work In Progress edit.

Once they were both happy and the edit was locked down, Andy continued the process of tweaking cameras, adding the rest of the players and adding secondary animation and fixes to remove all the intersections that were an inevitable problem with such dynamic capture. A combination of hard work, Rich Bentley’s extremely flexible rig and Maya’s excellent animation layers made the job less painful than it would have been in the past.

[youtube id="5kBJGDlPEkI" width="600" height="350"]

The scenes were set up by importing the relevant rigged mocap take for each player. Sometimes all four players were captured in one take, making it easier to recombine them in the scene, though more often than not shots needed a combination of captures from different players and takes. Andy then had to select suitable performances and integrate them all in Maya; mostly these were players chasing the main characters, or background characters in the shots.

The main difficulty at this stage was the distance the players could run before they ran out of space in the capture studio. This distance was relatively short, so it dictated the length or camera angle of some shots. Andy had to construct what looked like long runs of play using short separate captures, camera angles, and cuts.
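The idea behind stringing short captures into a longer run can be illustrated with a small sketch. This is not the production tooling, just the core concept: each incoming clip's root motion is offset so that its first frame continues from wherever the previous clip ended.

```python
# Illustrative sketch: stitch short mocap clips into one long run by
# offsetting every subsequent clip's root positions. Clip data is invented.

def stitch_clips(clips):
    """clips: list of lists of (x, z) ground-plane root positions."""
    stitched = list(clips[0])
    for clip in clips[1:]:
        last_x, last_z = stitched[-1]
        first_x, first_z = clip[0]
        dx, dz = last_x - first_x, last_z - first_z
        # Skip the duplicated first frame of the incoming clip
        stitched.extend((x + dx, z + dz) for x, z in clip[1:])
    return stitched

run_a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]   # short capture 1
run_b = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]   # short capture 2
print(stitch_clips([run_a, run_b]))
# → [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.5), (4.0, 1.0)]
```

In practice the joins also need orientation matching and a short blend, which is where the camera angles and cuts mentioned above earn their keep.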

The animation rig contained both the high-resolution version of the mesh and a very low-resolution proxy suitable for quick refresh rates while animating in Maya. Once Andy had the basis of each scene he would make a playblast in Maya and import that into the edit. Working this way ensured continuity, and complex sequences could be created very quickly.

[youtube id="JS6TWOYEjM8" width="600" height="350"]

These breakdowns show the animation editing process. Note how extra “weight” was added to the player, as well as the fixing of the many intersections that resulted from capturing the movement of a player and then applying it to a bulky suit.

Because of the bulkiness of the suit compared to the real player, intersections were obvious. Some could be minimised by careful placement of cameras, but most shots required extensive secondary animation. We achieved this by developing a rig in which the mocap drove IK targets rather than joints, as is usually the case. That meant Andy could go into each problem area and animate over the top of the mocap using Maya’s Animation Layers, essentially adding extra movement without destroying the mocap underneath. Animation Layers were also used to augment certain shots for dramatic impact, such as the head and neck of the player above and the final shot.
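The additive-layer idea can be sketched in a few lines. This is a conceptual illustration, not Maya's actual Animation Layers API: the fix layer contributes an offset on top of the untouched mocap keys, so removing the layer restores the raw capture.

```python
# Illustrative additive animation layer: base mocap keys plus an
# additive fix layer that defaults to zero where it has no key.

def evaluate(base_keys, layer_keys, frame):
    """base_keys/layer_keys: dict of frame -> value (degrees)."""
    return base_keys[frame] + layer_keys.get(frame, 0.0)

mocap_elbow = {0: 10.0, 1: 35.0, 2: 60.0}  # raw capture (invented values)
fix_layer   = {1: -8.0}                    # nudge the joint out of an intersection
print([evaluate(mocap_elbow, fix_layer, f) for f in range(3)])
# → [10.0, 27.0, 60.0]
```

The mocap data underneath is never edited; only the layer on top carries the fixes, which is exactly why this workflow is non-destructive.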

Unlit, motion-blurred “playblast renders” were then made of each shot to check just how much detail the viewer would actually see. This was a way of improving animation efficiency: if detail would be lost in the final render, there was little point in spending time, effort and ultimately money adding detailed animation or fixing intersections. Only areas we knew were visible were worked on extensively.

Basic hand and finger motion was captured at the mocap shoot, but only very coarse movements were possible from the data, and Andy wanted much more lifelike animation. Rich Bentley had written a script that applied various poses to the hands, which saved many hours of keying each individual joint. Andy used the pose generator and keyframed the poses as a basis for the finger animation.
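A pose-generator script of this kind might look something like the following sketch. The pose names, joints and angles here are invented for illustration; the real script drove the Maya rig directly, but the principle — key a whole named pose in one operation instead of joint by joint — is the same.

```python
# Hypothetical pose library: each named pose stores one rotation per
# finger joint and is keyed as a block at a chosen frame.

POSES = {
    "fist":   {"index_1": 80.0, "index_2": 95.0, "thumb_1": 40.0},
    "spread": {"index_1": -5.0, "index_2": 0.0,  "thumb_1": -10.0},
}

def apply_pose(keyframes, pose_name, frame):
    """Record one key per joint at `frame` for the named pose."""
    for joint, angle in POSES[pose_name].items():
        keyframes.setdefault(joint, {})[frame] = angle
    return keyframes

keys = {}
apply_pose(keys, "spread", 0)
apply_pose(keys, "fist", 12)   # e.g. catching the ball a few frames later
print(keys["index_1"])  # → {0: -5.0, 12: 80.0}
```

With the broad poses blocked in this way, the animator only has to offset and polish, not build every finger curl from scratch.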

After positioning the characters, laying out the scenes, re-animating the ball motions and applying secondary animation, Andy used particles to simulate sparks flying off in heavy collisions and interactions with the ground. This was done by hand, using emitters placed and keyframed at the points of impact. Turbulence, gravity and drag fields were then used to add interest to the motion as the sparks dissipate.
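The emitter-plus-fields setup can be illustrated with a minimal hand-rolled particle step. The names and constants below are assumptions, not Maya's particle system: sparks are emitted at an impact point, then gravity accelerates them downwards and drag bleeds off velocity every frame.

```python
import random

GRAVITY = -9.8
DRAG = 0.9          # fraction of velocity retained per step
DT = 1.0 / 25.0     # PAL frame step (assumption)

def emit(origin, count, seed=1):
    """Spawn `count` sparks at an impact point with random velocities."""
    rng = random.Random(seed)
    return [{"pos": list(origin),
             "vel": [rng.uniform(-3, 3), rng.uniform(1, 6), rng.uniform(-3, 3)]}
            for _ in range(count)]

def step(particles):
    """Advance every spark one frame: gravity, then drag, then integrate."""
    for p in particles:
        p["vel"][1] += GRAVITY * DT               # gravity field
        p["vel"] = [v * DRAG for v in p["vel"]]   # drag field
        p["pos"] = [x + v * DT for x, v in zip(p["pos"], p["vel"])]

sparks = emit((0.0, 0.5, 0.0), count=20)
for _ in range(10):
    step(sparks)
```

A turbulence field would simply be a third term in `step` adding pseudo-random acceleration, which is what gives real spark trails their flicker.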

Rendering

Once the animation was finally complete, the animation rigs referenced in all the scenes were swapped out for render rigs. These were the same as the animation rigs except that they didn’t contain an animation proxy and had been set up and optimised for rendering by Richard Wright, who had spent weeks designing the shaders and lighting for all the scenes in Renderman for Maya.

Richard then spent time taking the animation scenes and splitting them up into render elements. This was done to save memory and also to give the most flexibility at the compositing stage.
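At its simplest, recombining render elements in the comp is a per-pixel sum of the separate passes. A toy sketch (the pass names are assumptions, and real comps also use multiplied and screened layers, not just addition):

```python
# Illustrative AOV-style recombination: add the render elements
# per pixel to rebuild a beauty pass. Pixel values are invented.

def combine(passes):
    """passes: list of equal-length pixel lists; returns their per-pixel sum."""
    return [sum(px) for px in zip(*passes)]

diffuse    = [0.25, 0.5, 0.125]
specular   = [0.125, 0.25, 0.0]
reflection = [0.125, 0.0, 0.375]
beauty = combine([diffuse, specular, reflection])
print(beauty)  # → [0.5, 0.75, 0.5]
```

Because each element stays separate until this step, the compositor can rebalance, say, reflections without a re-render — which is the flexibility the paragraph above refers to.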

The scenes were rendered at 1920×1080 on Xenon Studios’ in-house 16-machine renderfarm, managed by Michael Powell. The various elements were rendered over the space of a few weeks, with render times ranging from a few minutes for simple elements to 5 hours per frame for the most complex.

Once the various elements were rendered, Richard recombined them in the compositing software, added lens flare effects and graded each shot. Those shots then went back into the edit, and a final graded titles frameset was output.

VFX Artist Justin Bates then took the footage and created the “Super League XVI” Logo for the final shot.

Promo

As well as producing the title sequence, Xenon Studios was also commissioned to produce material promoting the upcoming season. Animator Paul Clayton took assets created for the title sequence and worked with Adam and Michael to produce beauty shots of the suited players, including one featuring a 3D-scanned head of Super League young player of the year Sam Tomkins.

Ten24’s James Busby was assigned the task of scanning, retopologising and texturing a photo-realistic head for one shot Andy animated.

The first step was to scan and photograph Sam at the Wigan Warriors training ground. James used an Artec M 15 FPS structured-light 3D scanner to capture Sam’s face from ear to ear. A series of photographs was also taken of Sam’s head, which would later form the basis for the texture maps.

Raw scan data.

Reference photographs.

After aligning the scan data in the custom Artec software, the raw mesh was imported into ZBrush and cleaned up using a variety of tools. The data was then retopologised by deforming a pre-existing UV’d mesh to roughly match the features of Sam’s face, and the “Project All” function was used to transfer the details from the scan data to the new mesh.

Cleaned-up, retopologised model in ZBrush.
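The projection step can be sketched crudely as a nearest-point snap — a simplified stand-in for ZBrush's normal-based “Project All”, with invented coordinates:

```python
import math

# Toy detail projection: snap each vertex of the clean retopo mesh to the
# nearest point in the raw scan cloud. Real tools project along vertex
# normals and interpolate across the scan surface; this is the crude idea.

def project(retopo_verts, scan_points):
    projected = []
    for v in retopo_verts:
        nearest = min(scan_points, key=lambda p: math.dist(v, p))
        projected.append(nearest)
    return projected

scan   = [(0.0, 0.0, 0.0), (1.0, 0.1, 0.0), (2.0, -0.05, 0.0)]
retopo = [(0.1, 0.0, 0.0), (1.9, 0.0, 0.0)]
print(project(retopo, scan))
# → [(0.0, 0.0, 0.0), (2.0, -0.05, 0.0)]
```

The pay-off is a mesh with clean, animation-friendly topology and UVs that still carries the fine surface detail of the raw scan.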

Finally, the textures were projected back onto the cleaned-up mesh using Lightwave 3D. Bump, specular, gloss, epidermal and sub-dermal maps were extracted from the colour texture.
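One plausible starting point for deriving those grayscale maps is a per-pixel luminance conversion of the colour texture, sketched below with Rec. 709 weights. This illustrates the general approach only; the production maps would have been further levelled and painted by hand.

```python
# Illustrative extraction of a grayscale control map (e.g. a first-pass
# specular map) from RGB texture pixels via luminance. Values are invented.

def luminance(rgb):
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec. 709 luma weights

def to_gray_map(pixels):
    """pixels: list of (r, g, b) in 0..1; returns rounded gray values."""
    return [round(luminance(px), 4) for px in pixels]

colour = [(0.8, 0.6, 0.5), (0.2, 0.2, 0.2), (1.0, 1.0, 1.0)]
print(to_gray_map(colour))
```

Epidermal and sub-dermal maps are tinted and blurred variants of the same source texture, so a single set of photographs can feed the whole skin shader.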