After a long departure, the team is finally back on Machine and finishing up the film. Above are some brand new images from production. These haven’t gone through color grading and are still in progress, but it’s been very exciting to see new shots rolling through the pipeline. We’re on a tight production schedule for the next month, but if all goes well, we’ll be finishing in May and starting our research and application process for festivals.

This is a quick timelapse filmed on our own loco motion control system. Here we see (mostly) Charlène Barré setting up props and buildings for one of our city sets. Very rough car animation by me – Sunit Parekh-Gaihede.

Eventually, we’d like to show the full evolution of a shot (from set dress to final grade), but we decided to put up this test sooner rather than later.

This is another grade test – now a couple of months old, but I thought it would be interesting to put up. There are some cloth penetration issues here which we’ve since solved using a combination of a custom shader (which traces a distribution of rays out along the normals of the clothes and displaces the penetrating geometry back inside the clothes) and some manual fixes in the lighting scenes.
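To make the idea concrete, here is a deliberately simplified sketch of that displacement step: the cloth is reduced to a single plane, and any body point that pokes through it (a positive signed distance along the cloth normal) is pushed back just inside. The real shader traces a distribution of rays against the actual cloth geometry; the function and names below are illustrative only.

```python
def fix_penetration(points, cloth_origin, cloth_normal, epsilon=0.001):
    """Push points that have poked through the cloth plane back inside it."""
    fixed = []
    for p in points:
        # signed distance of the point along the cloth normal
        d = sum((pi - oi) * ni for pi, oi, ni in zip(p, cloth_origin, cloth_normal))
        if d > 0:  # point has penetrated through the cloth
            p = tuple(pi - (d + epsilon) * ni for pi, ni in zip(p, cloth_normal))
        fixed.append(p)
    return fixed
```

The `epsilon` margin keeps the displaced point from sitting exactly on the surface, which would otherwise shimmer between frames.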

We’ve also been testing the recently open-sourced stretchMesh as a way to solve the penetration issues, which we’ve had some success with and are considering integrating into the main character rigs.

The eyes in this test are based on raytraced reflections, using specific reflection objects. Since this test, our methods have shifted to instead use our point cloud as the reflection environment (this makes things a bit more predictable).

Here is a rough comp test of one of our sequences. The animation (still in progress) is by Sean Ermey, one of our recent team additions (I hope to have a post on our crew in the near future). The lighting comes mostly from the digitized version of our miniature sets and is entirely the work of our “pseudo” global illumination framework for 3Delight, which we call hydraLight. The framework makes a color-managed workflow and multi-bounce lighting easy, and seamlessly blends strategies (raytrace, point cloud) for reflections and refractions. More about the framework in another post.
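As a small illustration of the color-managed part of that workflow: the key step is converting display-referred sRGB values to linear light before doing any lighting math, and back again for viewing. This is the standard sRGB transfer function, not hydraLight’s actual code.

```python
def srgb_to_linear(c):
    """sRGB component (0-1) to linear light, per the standard piecewise curve."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Linear light (0-1) back to sRGB for display."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
```

Multi-bounce lighting only adds up correctly in linear space – mid-grey sRGB 0.5 is really about 0.21 in linear light, so summing bounces on raw sRGB values would badly overweight the midtones.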

We also used this sequence as our test of the newly (re)branded Autodesk PhotoFly, now called 123D Catch. We fed it 14 photos from our miniature sets (as many angles as we could get, including our shot camera), and about three minutes later had a fantastic 3D mesh (which we use for character interaction and color bleed). I’ve attached some of the photos below.

For those interested, we’re using a 300W Arri Fresnel (with a few layers of diffusion) for the hall light (which is perhaps a bit too sharp), a bounced 1000W Arri, with closed gates, for the moon light, and a dollhouse practical for both the interior lamp and the outdoor lamp (you can see the shadows on the curtains). I might bring in a desk lamp instead for the hallway light if we reshoot this sequence.

Maya Animator

Mid-senior level
3-4+ years production experience with Maya (although we will also take talented applicants who don’t match these criteria)

4-6 months employment

Skillset:

Good sense of performance, timing, weight and posing

Comfortable working with a small crew (this is not a large studio situation)

Can take responsibility for your own work and abilities

Able to work independently

Comfortable working with character, camera and vehicular animation

Experience with lip-sync

Your job will consist of adding subtlety and finish to an already well-developed animatic. The project is performance and dialogue oriented, and based around a live action shoot (4 simultaneous reference cameras). The style is inspired by naturalistic motion (not realism) and it will be your role to bring that style to the character’s performance.

Additionally, there will be some animation that is not character-based, but more general (vehicles, camera, etc), as well as some crowd animation.

We will be assessing applicants on the basis of their character experience and reels.

Your job will consist of working with our previs lead to help set the blocking, tone, pacing, and direction of storytelling. The task will be to translate a live action reference cut, script, storyboards, concepts, into a working blueprint for the rest of production. This is a challenging but rewarding role, with the possibility of a substantial creative contribution.

We will be assessing applicants on the basis of both technical know-how and storytelling skill.

So, this is the end of a long hiatus marked by very few posts. We’ve just entered into full production, and I’ll be keeping this blog updated. I’ll also encourage other team members to contribute posts on their own work through this year.

The above video is part of our recent development work, and took place at a studio, and their neighbor, in Copenhagen, Denmark. The video describes some of our process working with actors, miniatures and cg characters. As we get a bit further into production, and have worked a bit more with our actors, I’ll go into some more of the details of the process.

So, this blog has been pretty quiet for a couple of months. Hydralab has been busy with a number of other projects (which we hope to put up soon), and also with developing our pipeline tools.
At the same time, for the last 6 months, we’ve been working with the Danish Film Institute, through their New Danish Screen initiative, and spending some more time with the script. The focus of the story has sharpened considerably, and we’re now entering our second development phase, where I hope to test out some of my ideas.
We’ve also stepped back to rethink the production process. I’ll post more on this later, but our new process puts a considerable emphasis on a robust previs, with a skeleton crew, before any production begins.

Above is a recent motion control clip of one of our city sets (shot on our loco rig). There’s a bit of compositing on the movie above, and we have a test running with 15 CG characters in the scene to see how much animation we need to put in for the scene to read believably. If we get around to lighting that scene before production starts, I’ll put it up here.

Below are some more of the reference plates from the shoot, as well as shots from other city sets in progress. The stand-in figures are built by the talented Hanna Habermann who will be joining the project again in January for the rest of the shoot.

We’re now recruiting for the positions listed above. These are on-site positions, although I’ll also consider remote work, if the experience is at a high level.

In addition to the above, we’re trying to gather interns for the layout and scene setup work. Potentially, there is also some shot work, and on-set assisting, depending on your skill set, or how the work goes during production.

The work will happen at our studios, which are located within the Animation Workshop in Viborg, Denmark. The school has a vibrant community of students and professionals, and attracts talented people from around the world. It’s an inspiring creative environment.

The production runs from February to August, although we’ll be looking for interns to start as early as January. The other positions run from two to three months.

These are some of the images of one of our 1:24 scale city streets. There are also some images of props, courtesy of Moddler, that we’ll be receiving in the near future and incorporating into the sets. These sets are, like most things on the project, in progress, and we’re currently adding more text (signs, placards) and color to bring more life to the environments.

One of the problems we’ve discovered is that one can never have enough studio space – the shooting space has turned into a labyrinth of metal rails (for the motion control), computers, sets, lights, flags, bounce cards, and our monolithic fill dome (which we’ll use to try to hit the roughly 7:1 key/fill ratio we want for our outdoor environments).
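For anyone curious about the arithmetic behind that lighting target: a key/fill ratio is often easier to reason about in photographic stops, since each stop doubles the light. A one-liner makes the conversion explicit (a side note, not part of our pipeline):

```python
import math

def ratio_to_stops(key, fill):
    """Express a key/fill lighting ratio as a difference in stops: log2(key/fill)."""
    return math.log2(key / fill)
```

A 7:1 ratio works out to roughly 2.8 stops between the key side and the fill side of the set.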

Below are some more city shots, and some of the bedroom (which we’ve started shooting), and of the studio. There’s also a forced perspective shot – from the kitchen to the street, where we set a 1:24 street outside the 1:12 kitchen.

So, we’ve more or less finished the first version of our file browser – a custom Qt/PyQt interface written for the project primarily by Mikkel Jans. Above is the version view component of the browser. The browser is the start to managing our pipeline data processes – publishing, versioning (I’ll get into that below), launching files, and connecting (at the moment through socket interfaces) with our pipeline applications.

Versioning:

We’ve chosen to use Mercurial, a distributed version control system, for our file versioning. A user might typically save, say, a Maya file dozens of times during a day, but she’ll commit a new revision only a handful of those dozen times. Each commit requires a comment which describes pertinent information about that commit, and the user is able to revert to any commit in the history, or restore a commit as a separate file to a local folder. Branching is also possible, which means that it’s relatively simple to ‘try out’ ideas in your work files. In a typical programming environment, you might then merge that branch with your main branch, but it’s a bit more difficult with Maya or Shake files to merge through a text editor (so we’re not addressing that aspect yet).
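The shape of that workflow – commit with a comment, revert to any revision, branch from an older one – can be sketched as a toy in-memory model. Our browser actually shells out to Mercurial; the class and names below are purely illustrative of the history DAG that hg maintains for us.

```python
class Repo:
    """A toy stand-in for a Mercurial file repository."""

    def __init__(self):
        self.revisions = []  # list of (parent_index, comment, content)

    def commit(self, content, comment, parent=None):
        """Record a new revision; parent defaults to the latest revision."""
        if parent is None:
            parent = len(self.revisions) - 1 if self.revisions else None
        self.revisions.append((parent, comment, content))
        return len(self.revisions) - 1

    def revert(self, rev):
        """Return the file content exactly as it was at a given revision."""
        return self.revisions[rev][2]
```

Committing with an explicit older `parent` is what creates a branch – two revisions sharing the same parent – which is the ‘trying out ideas’ case described above.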

We’ve also written a standard view into our browser which displays the revision history on a selected file, with a visual graph that displays the DAG (directed acyclic graph) of the file history.

What this means is that we don’t add user names or version increments in our naming convention – each file’s history is saved in a Mercurial repository, along with comments, usernames, and other tags that we include in the changesets. Since Mercurial saves only file differences, and compresses that file history into a binary format, the entire repository for a file, including dozens of commits, is usually considerably smaller than the actual file (at least for most of our work files, which are saved in ASCII formats).

Mercurial forms one of the components of our pipeline. It allows us to easily roll-back assets, keep track of asset relationships (dev -> publish), monitor user activity on work files, and have an overview of the different iterations a file goes through. It’s relatively lightweight, cross-platform, and integrates well with our primarily Python based setup.

So, part of our recent work has been developing trees for the film. Below, you can also see some of the grass tests, using fake fur, and a base of different colors. The “trees” are actually bushes donated by our local cemetery (by a very friendly and helpful group of caretakers), which were on their way out to the trash. We’ve stripped them, reshaped them, and added bulk with various spice leaves (ground parsley, coriander, and other green/yellow dried spices). Bente (who you’ll notice standing in the pictures) has systematically developed a process for detailing the many trees that we see in the film.

Also pictured are Charlène Barré (responsible for a number of the props), and Sian Puckett (our new Spanish intern).

These are some of the recent miniature props we’ve been building in our in-house workshop, led by Bente Laurenz Jacobsen, and with Charlène Barré, Karen Rohde Johansson and Israel Hernandez. These stills are just progress shots taken during our dailies – which means there’s significant noise and shallow depth of field in most of the shots.

So, John Vegher, founder of Moddler (among other things), has generously offered to rapid prototype the props for our 1:24 scale outdoor sets. Above is our first prop, a bench which we modeled in Maya, detailed in ZBrush and then sent on to Moddler. Below is a turntable of the ZBrush model.

The idea is to ship the props over here and paint them before they integrate into the sets. We have about a month left before the outdoor shoot, so we’ll be spending some time putting all of the parts together.

As the pictures make clear, the results are fantastic. This process saves us a tremendous amount of time building the props at the small scale, and also spares us having to re-build versions of the props in 3D to match against. More photos below.

At the top is a version of the test with some color/texture and simple shaders. I’m also posting a “Making-of” so people can follow some of the integration process. These are both roughs – there are comp, animation, and render errors – but they are nonetheless interesting for us.

While I think the test got the crew used to the general pipeline/workflow, aesthetically we’re still a bit off. At the moment, this hits closer to something from Monster House, or Polar Express. I’d like to move more towards stop-motion, and we hope to get in some animation studies over the next few weeks, spending time with some shots from the fantastic Madame Tutli-Putli.

At the moment, the textures are mostly without detail (both in the diffuse and specular), so we’ll be working to increase some of the sophistication. We might also spend some time with the shaders, although I’m not yet convinced we need anything more than a blinn, some fresnel, and hi-detail textures.

We’ve now got our own motion control rig set up. It’s based on the last rig, which we shipped down to Studio SOI, who are using it for some exciting projects. The test above is from a demonstration we gave to the Danish Film Institute earlier today. There is a bit of subtle shaking in the shot, which comes from us handling the rig while the move was in progress.

Below are some recent tests of the trees Bente Laurenz Jacobsen (who is pictured below) is building for the project. We discovered that ambient daylight is difficult to recreate indoors, so we built a large rig (like a flash umbrella), that we will be stretching cloth over and hanging above the sets for the day shots. The shots below represent both the indoor light tests, and some outdoor shots. The environment around the workshop provides an interesting backdrop to the shots.

There are also some shots with Nancy Munford and Karin Ørum who came up for a weekend in February to finish work on the street sets.

The stairs on the house were some of the last items that got painted – hence they’re still foam in the pictures.

A lot of things have happened in the last month – one of which is that we’ve been sponsored by DNAsoft, developers of the RenderMan-based 3Delight renderer. The character in the rough test above is rendered with 3Delight, which we – myself, Aske Dørge, and Nicolai Slothuus – spent a bit over a week working with. I’ve included some images below on the various stages of the process. We took extensive set measurements to determine the camera position (although I think we’ll be trying out some image based modeling methods for the next test), shot chrome spheres, matte balls, and foreground bluescreen elements. As always, there’s a fair bit of compositing in Shake as well as some sound mixing in Final Cut Pro. Most of the sounds in this clip are downloaded from the great online resource freesound.org.

For the renders, we used 3Delight’s point cloud rendering methods – which meant that at small HD resolution, we could output our character with motion blur, displacements, depth-of-field, and occlusion (along with a range of other secondary passes – or arbitrary output variables) at under a minute a frame. Our next test is to come up with a global illumination process, using our set survey data and light-emitting surfaces baked into a point cloud, rendered with some custom shaders. One of the great features of RenderMan-based renderers is the simple shading language (RSL), which, in 3Delight, we can access through the Maya interface. This means we can test and write custom shaders in the interface before converting them to standard .sl files, which we then compile with 3Delight’s shader compile utility.
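The gather step of that point-cloud illumination idea can be sketched in a few lines: light-emitting surfaces are baked as points (position, emission, area), and a shading point sums what arrives from each of them with a cosine term and inverse-square falloff. The real version lives in custom RSL shaders; the Python below is just the geometry of the idea, with illustrative names.

```python
import math

def gather_pointcloud(shade_p, shade_n, cloud):
    """Sum emission arriving at shade_p (with normal shade_n) from baked points."""
    total = 0.0
    for pos, emission, area in cloud:
        d = [pi - si for pi, si in zip(pos, shade_p)]
        r2 = sum(di * di for di in d)
        if r2 == 0.0:
            continue  # skip a point coincident with the shading point
        r = math.sqrt(r2)
        # cosine of the angle between the surface normal and the emitter direction
        cos_theta = max(0.0, sum(ni * di for ni, di in zip(shade_n, d)) / r)
        total += emission * area * cos_theta / r2
    return total
```

Emitters behind the surface contribute nothing (the clamped cosine), which is what keeps bounce light from leaking through walls.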

For the animation pipeline, we decided to rely on Maya’s geometry cache features, which allow us to isolate the animation and lighting pipelines from each other. This means that the lighting scene references only the models (no rigs) and the layout, and the geometry cache imports all of the animation information. As the animation updates, so do the lighting scenes.
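The handoff can be pictured as a simple write/read pair: the animation side bakes per-frame point positions to a cache file, and the lighting side reads positions for a frame with no knowledge of the rig. The sketch below is an illustrative stand-in (JSON, invented function names), not Maya’s actual geometry cache format.

```python
import json

def write_cache(path, frames):
    """Animation side: frames maps frame number -> list of (x, y, z) positions."""
    with open(path, "w") as f:
        json.dump({str(k): v for k, v in frames.items()}, f)

def read_cache(path, frame):
    """Lighting side: pull deformed point positions for one frame."""
    with open(path) as f:
        data = json.load(f)
    return [tuple(p) for p in data[str(frame)]]
```

Because the lighting scene only ever sees baked positions, re-publishing the cache updates every lighting scene without touching a single rig.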

We also implemented some custom spotlights, with falloff regions, and on-screen visualisation. For this test, since we used spotlights to mimic all of our direct and indirect illumination, the falloff regions gave us more granular control over attenuation. At some point, I may look into a linear workflow, at which point Maya’s standard decay types might be more useful (or not).
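The falloff-region control described above amounts to this: full intensity inside a start distance, zero beyond an end distance, and a smooth blend between – which gives finer attenuation control than a single fixed decay exponent. A minimal sketch (hypothetical function, not our Maya plugin code):

```python
def falloff(distance, start, end):
    """Light multiplier: 1 inside start, 0 beyond end, smoothstep in between."""
    if distance <= start:
        return 1.0
    if distance >= end:
        return 0.0
    t = (distance - start) / (end - start)
    return 1.0 - t * t * (3.0 - 2.0 * t)  # smoothstep ease-out
```

Multiplying a spotlight’s intensity by this value lets a lighter place exactly where a light dies off, shot by shot, rather than accepting wherever linear or quadratic decay happens to land.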