This tutorial explains the production techniques used in the creation of Lucas Licata's The Game. As its title suggests, the animation explores the world of video games. This brought about the challenge of replicating a video game aesthetic in traditional broadcast animation. The tutorial will demonstrate how different techniques and tools can be used to create a short 3D game-style animation at a low cost. The outline of the tutorial is as follows:

An explanation of some of the fundamentals relevant to the technical elements of 3D animation; and

An explanation of the production techniques used in the development of The Game.

TECHNICAL ELEMENTS OF 3D
There are a variety of important technical elements that were required to simulate the 3D aesthetic in developing The Game. The following technical elements will be discussed in the tutorial:

The difference between low-count and high-count polygon modelling;

How and why UV texture maps are used in video games;

What is meant by the term rendering; and

The difference between real-time rendering and broadcast animations.

PRODUCTION TECHNIQUES
The following process was used in developing The Game:

Script and dialogue development through storyboarding and sketching;

Visualising the script through refining the storyboard and animatics;

3D production through modelling and animation; and

Completing the animation through rendering and compositing.

Figure 1: Example of high and low polygon-count modelling

TECHNICAL ELEMENTS OF 3D
There are a number of 3D animation techniques that will be introduced to you, including real-time rendering, low polygon modelling and UV texturing, that help to create the aesthetic of a video game. It is also important to highlight why the approach to developing 3D animation for video games differs from other types of 3D animation, such as that found in movies.

THE DIFFERENCE BETWEEN LOW-COUNT AND HIGH-COUNT POLYGON MODELLING
3D models are made up of individual surfaces called polygons (Figure 1). The polygons are the triangles that are outlined in white. They are created by plotting points on the x, y and z axes. With low-polygon modelling, the number of polygons is kept to a minimum. This allows the computer to display the image rapidly, because it has fewer polygons to calculate; this is called real-time rendering.
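To make the idea concrete, here is a minimal sketch (my own illustration, not taken from any particular 3D package) of how a model is stored: just points plotted on the x, y and z axes, plus triangles that index into them.

```python
# A low-polygon pyramid: four vertices and four triangular polygons.
# All names and values here are illustrative.

vertices = [
    (0.0, 0.0, 0.0),  # base corner
    (1.0, 0.0, 0.0),  # base corner
    (0.5, 0.0, 1.0),  # base corner
    (0.5, 1.0, 0.5),  # apex
]

# Each polygon is a triangle: three indices into the vertex list.
triangles = [
    (0, 1, 2),  # base
    (0, 1, 3),  # side
    (1, 2, 3),  # side
    (2, 0, 3),  # side
]

# The renderer's work grows with the polygon count,
# which is why games keep this number low.
print(len(triangles), "polygons to render")  # 4
```

A high-polygon version of the same shape would simply carry many more entries in both lists; the data structure does not change, only its size.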

WHAT IS MEANT BY THE TERM RENDERING
Rendering is the process of the computer calculating and assembling the image from a 3D scene. When you see a 3D model on your computer screen, you are seeing the rendered version of the model. The model itself is only a set of coordinates that define its shape and position. So every time you view a 3D object, you are viewing a rendered image of the model.

THE DIFFERENCE BETWEEN REAL-TIME RENDERING AND BROADCAST ANIMATIONS
Our televisions show us twenty-five pictures a second, which gives us the illusion that we are watching real life. These pictures are called frames. In a video game, the computer creates the frames by calculating (rendering) how each polygon will look as it comes into view. A low polygon count means there are fewer polygons to render, so the computer can render at a higher frame rate. The lower the polygon count, the less calculating the computer needs to do, and the more frames the computer can render. This allows us to interact with the animation in real time.

In video games the rendering and the interaction with the player are simultaneous. This means that the computer renders the frames at over twenty-five frames a second, giving us the same illusion as our televisions. This is referred to as real-time rendering.
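The relationship between polygon count and frame rate is simple arithmetic, which can be sketched as follows. The polygon throughput figure is purely hypothetical, chosen only to show the shape of the trade-off.

```python
# Why polygon count matters for real-time rendering at 25 frames a second.
# The throughput figure below is an assumed, illustrative number.

FPS = 25
frame_budget_ms = 1000 / FPS  # each frame must be rendered within this time
print(f"Time budget per frame: {frame_budget_ms:.0f} ms")  # 40 ms

polygons_per_ms = 500  # assumed hardware throughput
for polygon_count in (20000, 10000, 5000):
    ms_per_frame = polygon_count / polygons_per_ms
    fps = 1000 / ms_per_frame
    status = "real-time" if fps >= FPS else "too slow"
    print(f"{polygon_count:>6} polygons -> {fps:.0f} fps ({status})")
```

Halving the polygon count doubles the achievable frame rate, which is exactly why game models are kept lean while film models are not.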
High polygon-count models are used in broadcast animation because they allow for more detail. Broadcast animation is linear. Linear is a term used to describe, in this case, an animation that plays the same way each time you view it. Non-linear describes an animation that can change and go in different directions, like a video game.

Films like Disney/Pixar's Finding Nemo are linear animations, and can be rendered out over time as individual frames that are then brought together to create the animation at a later date. The bringing together of individual rendered frames is called compositing.
Sometimes frames can take several hours to render in broadcast animations like Finding Nemo, but the benefit is that you end up with an animation full of incredible detail.

Figure 2: Example of non-textured and UV-textured models

HOW AND WHY UV TEXTURE MAPS ARE USED IN VIDEO GAMES
Texture maps are a way of giving more information to the model without adding more polygons. The texture is an image or surface that is applied on top of the polygons. The map refers to the way in which the image is positioned relative to the polygons.

The 3D models found in games use UV texture maps. A model such as the non-textured model of a head in Figure 2 is flattened out to become the texture map. A flat image (in this case a picture of a face) is then applied to the texture map to create the textured model. Details such as eyes can therefore be added to the model without increasing the polygon count (Figure 2).
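The mapping itself can be sketched schematically (this is my own illustration, not how any particular 3D package stores its data): each 3D vertex also carries a 2D (u, v) coordinate that says which point of the flat texture image it lands on.

```python
# One triangle of a hypothetical head model: 3D positions...
positions = [(0.0, 1.0, 0.2), (0.3, 0.9, 0.2), (0.1, 0.7, 0.3)]
# ...and their UV coordinates, in the 0..1 range of the flattened-out texture.
uvs = [(0.50, 0.95), (0.62, 0.90), (0.54, 0.80)]

def sample(texture, uv):
    """Look up the texture pixel for a UV coordinate (nearest-neighbour)."""
    width, height = len(texture[0]), len(texture)
    u, v = uv
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A tiny 4x4 grid of RGB tuples standing in for the face photograph.
texture = [[(r * 64, c * 64, 0) for c in range(4)] for r in range(4)]

# The detail (an eye, say) comes from the image, not from extra polygons.
for uv in uvs:
    print(uv, "->", sample(texture, uv))
```

The key point survives even in this toy form: adding detail means editing the flat image, while the polygon count of the model stays fixed.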

PRODUCTION TECHNIQUES
The production techniques used in developing The Game are common to more than just 3D animation. Techniques such as developing good scripts and dialogue are considered important to many creative pursuits, such as film, plays and TV shows. Here, techniques including the development of script and dialogue, modelling, animation and compositing are all presented as key parts of the process of developing The Game.

DEVELOPING THE SCRIPT AND DIALOGUE
Once the idea of The Game was conceived, a variety of characters and scenarios were sketched. The main character for the animation, the Surgeon, was created this way.

Character dialogue was developed using a video camera, which allowed me to improvise different personas and styles of delivery for the Surgeon character. A draft script for the Surgeon was created by putting together the elements from these recordings that I was most happy with.
Using my voice from the video, as well as the initial concept drawings, a rough storyboard and animatic were developed. An animatic is an animated storyboard. This helped me to identify scenes and characters. The benefit of using rough drawings was that changes didn't take long and different ideas could be explored.

Using the rough storyboard and a video camera, I recorded myself acting out the script. This enabled me to develop the timing of the animation.

Figure 3: Refined storyboard that illustrates the first scene from The Game

VISUALISING THE SCRIPT
A more finished storyboard was then developed through drawings on paper. This enabled different possible shots to be explored, and helped to further refine the environments and characters. Figure 3, which is an image from this refined storyboard, illustrates the first scene of The Game. From the storyboard, a new animatic was created, complete with recorded dialogue.

Figure 4: 3D environment and character models

3D PRODUCTION: MODELLING AND ANIMATION
It was at this point that I started modelling. I used low polygon modelling and UV texturing, allowing me to use photos and drawings to show details. Figure 4 provides two examples of 3D models from The Game: the first, of the hospital hallway, is termed a '3D environment', and the second, of the Surgeon, is termed a 'character model'. These models were developed to mimic the 3D look of video games. One advantage of this approach was that it kept the time required for rendering low.

Figure 5: Rendered frame from animation blocking for The Game

After the models had been created, it was time for blocking. Blocking is the process of arranging the models to examine the movement before developing the detailed animations, such as a walk cycle. Figure 5 is a rendered frame from the blocking process used in developing The Game. The 3D model of the Surgeon has hands outstretched, still like a doll, whilst being moved through his environment.

To develop the animation in detail and make the Surgeon appear to move naturally, I used a video camera to record myself acting out the Surgeon's part. Using the video I could see the desired movement for the walk cycle. It also allowed me to develop the timing. I knew where and how the Surgeon's limbs should move. The video was then placed into the 3D program behind my models, giving me a template to work from.

Figure 6: Test render of a frame from The Game

COMPLETING THE ANIMATION
The low polygon modelling really paid off at the test rendering stage. Due to the low polygon count, I was able to rapidly render out a close representation of the final animation (Figure 6).

After looking at the test renders, I could see that the animation wasn't working as well as I had intended. It wasn't snappy enough, and the dialogue still needed work. After a number of attempts to improve the script myself, Declan De Barra, the voice of the Surgeon, assisted in editing the script and really brought it to life.

Using the test renders and the new dialogue, I was able to edit a version that I could test on people to gain feedback. After making changes based on audience feedback, I was able to settle on the final animation. This version was then given to Nick Godkin, a sound designer, who wrote the introduction and credits music and also created the soundtrack.

Figure 7: Final render of a frame from The Game

Now all that was left to do was to render out the animation and composite the sound. I rendered out each shot separately as uncompressed images, twenty-five for every second. Then I turned the images into small movies and brought them all together in Adobe After Effects. I then created the title credits and end credits and, hey presto, the animation was done (Figure 7).
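As a rough back-of-envelope check of what "twenty-five images for every second" adds up to, the arithmetic can be sketched as follows. The exact duration is an assumption on my part, since the animation is described only as almost two minutes long.

```python
# Estimating the total frame count for the final render.
# duration_s is an assumed figure ("almost two minutes"), not a stated one.
FPS = 25          # twenty-five uncompressed images per second
duration_s = 115  # assumed: just under two minutes
total_frames = FPS * duration_s
print(f"{total_frames} individual frames to render and composite")  # 2875
```

Even at a few seconds per frame, a count in the thousands shows why keeping per-frame render times short mattered so much for the schedule.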

SUMMING IT UP
This tutorial has introduced you to a range of elements of 3D animation, including real-time rendering, low polygon modelling and UV texturing. These elements were referred to because of their use in creating The Game, giving it the aesthetic of a video game. The tutorial has discussed the production techniques involved in developing the script and how that script was visualised. The production of The Game provides an example of the steps involved in creating a short animation at low cost.

The Game took three months to produce from start to finish. The techniques presented in this tutorial allowed for the relatively quick development of the almost-two-minute animation, and, as they say, 'time is money'.

In hindsight I would bring in the acting talent much earlier. Once the actor's voice was down, I started to see how the timing was or wasn't working. The video camera was a great help in providing a process for rapid development of both the script and the animation. The low polygon modelling allowed for quick changes and shortened the final rendering time. The UV textures and sound were key to communicating my ideas and giving the story its detail.