The first talk of Saturday was by Aaron Holly, a rigger at Disney. The talk was about deformable rigging in Maya. The idea is to enable animators, who often have a background in classical 2D animation, to achieve the exact pose they require, including all kinds of squash and stretch.

The approach Aaron shared was to build simple skeletal structures that drive an intermediate deformation level whose controls can be torn off the skeletal structure, thus achieving any exaggerated pose required.

Aaron, who studied philosophy before swerving into the 3D realm, actually invoked Occam’s razor as the underlying philosophy of his approach to 3D rigging. Occam’s razor states that, given two valid solutions to a problem, the simplest solution is invariably the best. Nice words to work by.

Another issue Aaron talked about is linking stuff (secondary shape animation, for example) to the rotation of a ball joint in a rig. A ball joint will usually have rotation on all three axes, while you will typically want only two values to link your stuff to: a top/bottom rotation as well as a forward/backward one. The approach shown was to project the end of the joint onto a plane at the root of the joint. The position on this 2D plane vis-à-vis the root of the joint thus represents in 2D space the amount of rotation applied to the bone. Interesting. I’m sure a custom operator for XSI can be written that would compute this in a flash. This to-do list of mine is getting way too long!
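As a quick back-of-the-envelope sketch of that projection (plain Python; the function name and axis conventions are my own, not Aaron’s), it boils down to two dot products:

```python
def joint_swing_2d(root, tip, fwd, side):
    """Project the bone tip onto a plane at the joint root.

    root, tip -- 3D points; fwd, side -- unit vectors spanning the
    plane (e.g. the joint's forward/backward and top/bottom axes at
    rest).  Returns (u, v): the tip's offset in that plane, two
    values that encode the joint's swing and can drive other stuff.
    """
    d = [t - r for t, r in zip(tip, root)]      # root-to-tip vector
    u = sum(a * b for a, b in zip(d, fwd))      # forward/backward amount
    v = sum(a * b for a, b in zip(d, side))     # top/bottom amount
    return u, v
```

Two dot products is all a custom operator would really need to evaluate per frame, which is why this should indeed run in a flash.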

Mark Lefitz

Mark was showing a typical digital environment workflow. A lot of interesting stuff was shown; Mark’s production experience really shows.

He shared tips and tricks for easing the workflow, such as working at half resolution or 1K and then scaling up renders for final output.

All in all a neat talk. Mark has often worked in Maya and RenderMan and mentioned some pass workflows that were interesting. XSI’s passes and partitions still seem to dwarf anything else I’ve seen, though.

Having never worked in a games development house, I’d never really realized how much the games and film industries have come together.

Chris of LucasArts spoke a lot about convergence. We all know the Lucas empire is huge, and as technologies have been moving to a common nodal point, a lot of effort has been put into unifying pipelines and processes over at George’s place. Slick stuff.

We’ve all heard a bit about Zeno, ILM’s new development framework for custom tools. Well, it seems to be a much larger endeavor than I thought. The guys at ILM R&D seem to have almost come up with a full content creation system: integrating game technologies to create film previs tools, moving editing tools from a film background into a cutscene-blocking environment, and providing a unified environment for the artists. It’s all proprietary, so we can only guess at all the things Zeno can do.

Lastly, we were able to see a tech demo of the euphoria technology jointly developed with UK-based NaturalMotion, who market the character simulation software Endorphin. Seamless blending from animation to ragdoll and back, as well as simulated character motion reacting to environmental factors. Imagine a simulated character latching on to a second-story railing after having been launched sky high by an explosion. Check out the Endorphin demo to get an idea of the technology.

Consumer technology is always pushed forward by a few R&D brainiacs going “I bet I can do this” while everyone else is saying “Wouldn’t it be cool if…” or even “It can’t be done”. Looking at Stanford, UCLA, ILM, Disney, Sony Pictures Imageworks and other large players, as well as a host of smaller ones, makes me say it’s a good time for digital content creation.

It was refreshing to see someone approach digital compositing from a more technical point of view. Most people, when they talk about compositing, concentrate on the finished product and its artistic and aesthetic value, but knowing what happens under the hood is, IMHO, a prerequisite to getting that extra edge.

Jeremy managed to explain some of the mathematics involved in very clear and simple terms. It was nice to have representatives of both the artistic comper (Mark Lefitz) as well as the technical comper (Jeremy) in the same conference.

Jeremy also covered the creation and management of passes/render layers in Maya and how these different outputs can then be recombined in Shake to produce final output. XSI’s implementation of this stuff rocks. The use of XSI in that last phrase wasn’t a typo.
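For the curious, the recombination step itself is usually simple per-channel arithmetic. Here is a minimal sketch of one common additive recipe (my own example, not necessarily the exact math Jeremy showed):

```python
def recombine(albedo, diffuse, specular, reflection):
    """Recombine separate render passes into a beauty value,
    channel by channel:
        beauty = albedo * diffuse lighting + specular + reflection
    Each argument is an RGB triple (or any same-length sequence).
    """
    return [a * d + s + r
            for a, d, s, r in zip(albedo, diffuse, specular, reflection)]
```

The whole point of splitting passes like this is that you can regrade the specular or reflection contribution in the compositor without re-rendering anything.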

… And Jeremy was the first speaker of the weekend I heard mention XSI. Is Maya really so pervasive, and have so many people not yet discovered the strong points of Softimage’s product portfolio? You do know they’ve evolved quite a bit since Softimage|3D!

Emile is obviously a great animator with a lot of experience. His demo of the squash-and-stretch style of cartoon animation as applied to 3D characters was very enlightening. Emile often stressed that, as an animator, it is very important for him to be as close to the character as possible. For the riggers out there, he is saying that the rigs you produce should keep the animator on the character and avoid as much clutter as possible. For the animators out there, he is saying that you are as much an actor as an animator and you should live and breathe through your character while you are working. Anything in your 3D environment that could tear you away from the performance should be avoided. Even if Emile seems to be the strong, silent type, or maybe he was just really nervous, he is obviously passionate about his work and it shows through his results.

While a lot of movie critics talked about digital makeup or flesh extensions for Davy Jones, ILM is working hard to debunk that myth. Davy Jones and his crew are 100% digital, eyeballs and all.

The talk was pretty much the usual show and tell, and when ILM does a show and tell you can be sure a lot of your questions will be answered with either “It was proprietary software” or “We did it with Zeno” — which is proprietary software. I guess you can do that when you have one third as many coders and TDs as artists.

One thing that really blew me away is a proprietary tool ILM developed for this show called iMocap (taking a page from Apple’s marketing handbook, are we?). This thing could pick up actor performance from a set with the main film camera and two witness cameras. The kicker is that the iMocap system doesn’t care about lighting conditions and can pick up performance from many actors at once in an almost unrestricted set space.

… Say what?

Yep, they built an uber-flawless mocap system. I would love to get at the guts of this thing and see how it works, but I guess I’ll have to get my own gears turning and figure it out on my own.

Arnaud started with an overview of the production process over at PDI, and he mentioned something called the PDI pipeline. Yeah, don’t you love these large studios and all their proprietary tools?

Most of the talk was a show and tell about Shrek 2’s effects work and, refreshingly, a few details were given. One effect that was dissected was the fireballs. Surprisingly few passes and a rather simple solution — growing isosurfaces with fractal noise displacement and a normal-based color shader — accounted for 90% of the effect. Neat.

All in all I had a great time, because content was king at ADAPT 2006. For a first-time conference it could have been otherwise, but the organizers really did their homework.

I hope next year I’ll get to meet you all at the 2007 edition.
Cheers.

This entry was posted on Sunday, September 24th, 2006 at 9:40 pm and is filed under Bablings and Ramblings.

11 Responses to “ADAPT 2006”

That was a total perversion of Occam’s razor, though… People have used Occam’s razor to mean “the simplest solution to a problem is usually the right one”. It’s not. Some problems have very complicated explanations and solutions. Occam’s razor is about eliminating explanations of a phenomenon that include elements that have no effect on the result. People have used Occam’s razor to say the aliens built the pyramids, because to them it seems to be the simplest explanation of how they got built. However, that explanation introduces a lot of pseudo-science that isn’t necessary, so it doesn’t pass Occam’s razor. Lots of people pushing big stones doesn’t require introducing any extraneous elements not related to the problem, so it’s more valid.

A solution to a problem that raises more questions than it answers is, dare I say, not really a solution. Those who would use Occam’s razor to back a solution based entirely on speculation aren’t thinking straight… But I understand your point.

Back to ADAPT, I agree the conference as a whole was amazing; for a first-time event it can’t get any better. That being said, I was slightly disappointed with the hands-on portions of the 3D track: fairly basic demonstrations of rigging, tracking, comping and layering. I did enjoy the introductions and bios of the presenters — they all had great personal stuff to show & tell. The more corporate talks from ILM, LucasArts and DreamWorks were recaps of past SIGGRAPHs, but that’s fine, it gives Montreal a chance to see them.

The 2D track talks were excellent, much better suited for this kind of artist-driven conference. Iain McCaig’s presentation was particularly impressive — the best presentation I have seen in 10 years.

I had a great weekend, and hats off to the ADAPT team. I have high hopes for the 2007 edition, this thing is on a roll!

The part of his shoulder rig that presents a problem to me seems to be extracting the U and V coordinates of the null which is constrained to the NURBS surface. I can’t recall any native XSI tool to do so off the top of my head.

Sorry for spamming, but now that I think of it, within XSI you can’t have two constraints acting on the same property like he did with the surface and positional constraints, so one would have to figure out how to get the same effect with other tools.

I don’t know the exact setup you need to emulate the shoulder setup you are describing, but in XSI it is possible to create a surface/position-style constraint. So if the following setup doesn’t do exactly what you want, some aspects of the thinking may provide a pathway to get there:

1) Get a Null to be your controller
2) Create a NURBS or poly surface to be your bounding surface
3) Create a very small linear curve where the center is at one of the vertex points and pose constrain it to your controller
4) Use a shrinkwrap operation to shrink the curve onto your surface
5) Get a second null to be your animation node, and constrain it to your curve.

With a setup like this, you basically get a null that binds itself to a surface, but you can control its position in XYZ space.
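As for reading the U/V back out of a surface-bound null, one brute-force fallback (outside any DCC) is a closest-point search over the parametric surface. A minimal sketch, assuming you can evaluate the surface as a function of (u, v) — the function names here are mine, not any XSI API:

```python
def closest_uv(surface, point, samples=64):
    """Brute-force closest-point search over a parametric surface.

    surface(u, v) -> (x, y, z) for u, v in [0, 1].  Returns the
    grid (u, v) whose surface point is nearest to `point`.  Crude,
    but a refinement pass around the best cell would tighten it up.
    """
    best = None
    for i in range(samples + 1):
        for j in range(samples + 1):
            u, v = i / samples, j / samples
            x, y, z = surface(u, v)
            d = (x - point[0]) ** 2 + (y - point[1]) ** 2 + (z - point[2]) ** 2
            if best is None or d < best[0]:
                best = (d, u, v)
    return best[1], best[2]
```

In practice you would only run the search once per frame for the bound null, so even this naive grid scan is cheap at rig-control resolutions.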

Thanks for the tip, I can see how this can come in handy at some point. As to Aaron’s setup, I got it working fine, but at the end of the day you still have two values that you use for your blend shapes. And you still run into the problem of blending those shapes from those two values in a situation like a shoulder, so basically it doesn’t offer much help for a realistic shoulder.