this is your rig on sugarcubes

Posts Tagged 3d

I was working on a simple mesh for Unity in Blender 2.65a and I could not for the life of me remember how to show normals (to find some flipped normals that were causing faces to render improperly). Some spots say that the normals are displayable from the N-Key panel in the 3D view, but I thought a more concrete bit of information was in order.

First, the normals are only displayed in Edit Mode. While in Edit Mode, if you mouse over the 3D viewport and then press the N-Key, you should see buttons appear in the N-Key panel under the Mesh Display category. Pressing one of those buttons will show vertex normals or face normals, respectively.

Nice thing about Unity is how it updates; I love that saving over an old FBX file will replace every instance of the mesh in scene prefabs, provided you haven’t done anything wacky. One save and all the normals get set to where they need to be. Not to mention, the Unity 4.0 interface upgrades are a great addition!

The second half of the work I did was to automate the process of moving data around between Motion Builder and Maya, and to make tools for the animators that lightened their loads when it came to exporting and retargeting animations. I was also responsible for a batch conversion pipeline.

On the project there were animations in Motion Builder FBX files that were intended for reuse, which meant loading and retargeting multiple animations across multiple characters. This is a case (if only minimal edits are required) where the HIK system in Maya can be used to blast through conversions and free up animators for other work. In addition, many of the original animations lived in single FBX files as Takes, and the new decision was to have one file per animation (allowing multiple animators to work on the same character without file collisions in the version repository), so the multi-take Motion Builder FBX files would need to be split in batch.

The Maya Human IK system as of Maya 2012 is usable and has a lot of good benefits. Much of what they say works out of the box really does, provided you stick within the system: the ability to retarget animations between characters of differing sizes and joint placements works very well, and the fidelity of those animations stays high. If you rename or reorient bones but still have a Characterization in place for both characters, the transfer will still work out. However, there were also significant downsides:

The Motion Builder to Maya one-click scene send did not work as expected 100% of the time. When transferring from Motion Builder to Maya, I often found that while the Characterization would transfer over, the rig itself would not be properly connected, and many times did not even come in at the right position in the Maya scene. Baking the keys down to the skeleton, transferring, and regenerating the rig on the Maya side does work. You lose the editability of having fewer keys, but you get a one-to-one transfer between the two programs this way, and the Characterization makes rebuilding and rebaking the keys to the rig a one-click process.

On the Maya side you lose a lot of the features you’d expect. For example, the animators complained about not having foot roll controls. Regular Maya constraints don’t behave the way you’d expect, and adding onto an HIK rig can be trickier than building on top of a regular Maya rig. The strangest thing was that you can’t zero controls. If you want to return to the “stance” pose, you have to change the input to the skeleton, then key the rig at the frame you want to have zeroed, and finally go back to having the rig control the skeleton. Editing curves on the HIK rig can be frustrating, as both the FK and IK objects are used to control the final position of joints, and the different Human IK modes for posing and interaction pull at the body parts in different ways; often animators were baffled about which controls were causing jitters or other issues, and it was usually a control for a body part much higher up the hierarchy. Lastly, the HIK controls and skeleton don’t have the same orientations as the bones they’re based upon. If you’ve set up your skeleton in Maya properly with behaviour-mirrored arms and legs, you’ll find that you have to pose HIK-rigged characters’ limbs separately anyway. (I only had time to look into these issues for a moment, as I had a lot on my plate; if there are easy solutions that were overlooked I’d love to know what they are.)

I had a look at the system and the commands it used when I walked through the Characterization and retargeting processes, and determined at the time that Python was not the way to go for the retargeting pipeline itself. I found in tests that it was more work to get the MEL functions behind the HIK system working from Python than it was to write MEL wrapper functions and call out to them from Python when necessary. It was also more efficient to use the MEL functions as they were, as opposed to pulling them apart to find the necessary node connections to set up the system on my own.
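To illustrate, here’s a minimal sketch of that wrapper pattern. The Python-side helper names and arguments here are my own invention (only the underlying MEL functions, like mayaHIKsetCharacterInput and setCharacterObject, are from this post); in Maya you’d hand the resulting strings to maya.mel.eval, but building the command strings in separate functions keeps the helpers testable outside of Maya.

```python
# Sketch of calling MEL wrappers from Python by building command
# strings. In Maya the strings would go to maya.mel.eval(); outside
# Maya we can still unit-test the string construction.

def _mel_call(func, *args):
    """Format a MEL procedure call, quoting string arguments."""
    formatted = []
    for a in args:
        formatted.append('"%s"' % a if isinstance(a, str) else str(a))
    return '%s(%s);' % (func, ', '.join(formatted))

def set_character_input(character, source_character):
    """MEL call that retargets `character` from `source_character`."""
    return _mel_call('mayaHIKsetCharacterInput', character, source_character)

def set_character_object(joint, character, slot_index):
    """MEL call that plugs a joint into a Characterization slot."""
    return _mel_call('setCharacterObject', joint, character, slot_index, 0)

# Inside Maya you would then run, e.g.:
#   import maya.mel
#   maya.mel.eval(set_character_input('Enemy1', 'Hero'))
```

The nice side effect of keeping the wrappers this thin is that the MEL layer stays exactly what Autodesk ships, so version upgrades are less likely to break the pipeline.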

There’re a few lists of functions available online already (as I discovered on [insert blog link]). Here’re the ones I ended up using.

HIKCharacterizationTool, HIKCharacterControlsTool — These bring up their respective windows / docked panels. Functions that depend on one of these windows will fail if it isn’t open, so keep that in mind when running HIK commands in batch.

getCurrentCharacter() — Returns the name of the current character as a string.

hikBakeToSkeleton — Bakes the keys from the rig or from another character being used as a retargeting source to the skeleton. I used this function when exporting from Maya to the game engine.

characterizationControlRigDelete() — Completely removes the control rig from the scene and sets the source for the character back to its skeleton.

setRigLookAndFeel(name, look number) — There are a few different looks for HIK rigs. In batch I found it nice to set the one I preferred before saving files out for animation fixes.

mayaHIKgetInputType — Returns 0 if input type is Stance Pose, 1 if input type is skeleton or control rig (I guess this means “self”), and 2 if input is another character.

mayaHIKsetCharacterInput(character, input character) — For retargeting, allows you to set the input of one character to be another in the scene.

characterizationCreate() — Creates a new Characterization node. You can rename the node and then make the UI recognize the new name with the following command.

characterizationToolUICmd — Useful for setting the current character name: characterizationToolUICmd -edit -setcurrentcharname [name]

setCharacterObject(object name, character name, characterization number, 0) — I don’t think I’ve seen this documented elsewhere, but this does the connecting during the Characterization phase from your joints into the character node. It appears the connections are specific and need to be fit into particular indices in a message attribute array, so the “characterization number” is something you need to figure out ahead of time if you’re doing a large number of these in batch. Some important numbers:

Ground: 0
Left Thigh: 2
Left Calf: 3
Left Foot: 4
Left Toe: 118
Right Foot: 7
Right Toe: 142
Right Calf: 6
Right Thigh: 5
Pelvis / Center of Motion: 1
Left Clavicle: 18
Right Clavicle: 19
Left UpperArm: 9
Left Forearm: 10
Left Hand: 11
Right UpperArm: 12
Right Forearm: 13
Right Hand: 14
Neck Base: 20
Head: 15

The nice thing about this is that once you know all the numbers, you can slide them into attributes on the joints in your skeleton and use that data to apply characterizations later on.
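As a sketch of that idea, here’s one way the slot numbers from the list above could be stored in a lookup table and turned into setCharacterObject calls in batch. The slot key names and the example joint/character names are hypothetical placeholders; map your own rig’s joint names onto whichever slots your Characterization actually uses.

```python
# Characterization slot indices from this post, keyed by an assumed
# naming scheme. Only the numbers come from the post; the key names
# are illustrative.
HIK_SLOTS = {
    'Ground': 0,
    'Pelvis': 1,
    'LeftThigh': 2, 'LeftCalf': 3, 'LeftFoot': 4, 'LeftToe': 118,
    'RightThigh': 5, 'RightCalf': 6, 'RightFoot': 7, 'RightToe': 142,
    'LeftUpperArm': 9, 'LeftForearm': 10, 'LeftHand': 11,
    'RightUpperArm': 12, 'RightForearm': 13, 'RightHand': 14,
    'Head': 15,
    'LeftClavicle': 18, 'RightClavicle': 19,
    'NeckBase': 20,
}

def characterize_commands(character, joints):
    """Yield setCharacterObject MEL calls for a dict of
    {slot_name: scene_joint_name}, skipping unknown slots.
    In Maya, each yielded string would go to maya.mel.eval()."""
    for slot_name, joint in sorted(joints.items()):
        if slot_name in HIK_SLOTS:
            yield 'setCharacterObject("%s", "%s", %d, 0);' % (
                joint, character, HIK_SLOTS[slot_name])
```

Storing the slot numbers as attributes on the joints themselves (as mentioned above) works the same way; the dict here just stands in for wherever that data lives in your pipeline.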

Going forwards, if I were to use the same tools again in another production (and in cases where animation needs to be transferred between two differing skeletal hierarchies, it would make sense), I think I’d pull the code apart a bit more and have a look at how the connections are made at the lowest level, then rewrite a chunk of this code in Python.

One more thing: using the standard functions, sometimes the UI will take a bit to catch up. Unfortunately, this means that functions which take inputs from the relevant UI will fail in situations where running them manually works fine. evalDeferred didn’t fix this issue for me every time, possibly because of how Qt and Maya are connected together and how they both do lazy updates at different times. I haven’t delved much into PyQt and Maya’s Qt underpinnings just yet, but the updating behavior is something for further study. In the interim, I found that calling maya.utils.processIdleEvents to make sure all events were taken care of after major steps in the characterization or baking processes helped the UI catch up.

We’ve had a number of new hires recently at March. Our new Character Lead showed me something the other day that on the surface of it was so simple, but the application of it is likely to change the way I work entirely with regards to character workflow.

I’m not a bad modeler. (I’d prove it with some images, but all my coolest stuff is still under NDA.) I’ve spent the last year or so focusing on topology flow for animation, and until about a week ago I thought I was doing alright.

But yesterday I was watching the Character Lead remodel (or rather, reshape) a character on our show. The mesh is much more dense than I’d expected, and his technique for doing mass changes in large sections of verts is very interesting (similar to how I do retopology in Blender).

While the new application of that modeling technique is going to be very useful to me when I return to modeling, what really got me was when I asked him about workflow and on keeping triangles out of the mesh. His answer? Add edge loops / splits to the model and force the triangles into quads; don’t be afraid to make the mesh higher resolution.

I ended up thinking about that for the rest of the day. It echoes a conversation I had with Mike years ago when I was dithering over the purchase of my current MacBook Pro. He was pushing for me to get it because he thought my work was being limited too much by hardware considerations. At the time I hadn’t considered that I was doing more work than was necessary on my 12″ Powerbook, building scenes with a lot of structure for keeping a limited number of polygons on screen to keep the interaction speed usable. When I moved to the new laptop and loaded the animation I was working on at the time, the difference was night and day: the file in which I was previously hiding so many things and using low-res proxies now ran at full speed with final geometry. I realized Mike had been right all along (as he is with many things), and that simple hardware change had a fundamental and lasting effect on how I work and how my work has developed.

However, that nagging sense that things can always be lighter, more efficient, has never really left. I model defensively for rigging: there are always exactly enough loops and lines for the shape I want, but no more. I try to keep poly count low so things like skin clusters and other deformers don’t work overtime, and so that weight painting isn’t painful. While these are valid concerns, the conversation I had yesterday made me realize that there’s a fine balance between keeping things light and having enough polys in there to make modeling and blend shape building easy.

I guess the point is, the next model I make is going to be significantly higher poly. And I need to always be aware of my old habits and whether or not they hold me back as I continue to hone my skills. When it comes to animation, don’t fear the poly!

Been too busy at work to think about anything other than fur and rendering, but a lot’s happened in the past few days in 3D and with school.

For starters, the story for my short film has totally changed. ^_^ It’s become a merger between the story of the grandmother and the cockroach, and the original Ramswoole maid story. I’ll post more about it when I have a finished leica reel, which should be some time next week.

I was finally able to get the Wipix no-flip leg working in Blender and that makes me happy. I still need to finish the rig and get Briar weighted, but I’m feeling a lot better about my decision to go with Blender for this short. Every time I have time to sit down and work in it, things just flow along.

As far as 3D news goes, Modo 4’s first set of preview vids are up at Luxology. Normally I watch Modo with only a passing interest, but there’s one video where a few thousand instances of a Rhino are spun around in the preview renderer, with full radiosity / GI going and volumetric lighting. It’s not unlike the speed of FPrime, although the instancing is something the old LW couldn’t do without third-party plugins (and I don’t think FPrime worked with them).

But that brings me to the second big CG thing of the week: Lightwave Core was announced and Newtek is already accepting pre-orders. There was a serious snafu with how the reveal went, but what was shown has me pretty excited: essentially an underlying architecture not unlike that of Houdini / Maya, but with Lightwave workflows and a fully-open C++ / Python SDK. It’s also using the Collada format as its regular scene format, which means it’s already a step ahead of all other packages with regards to interoperability. They said they’ve made a few extensions to the format, so just how compatible the LW Core files will be with other Collada-reading apps remains to be seen, but it’s a step in the right direction (i.e., away from the horribly flawed and closed-source FBX).

There were a lot of buzzwords bandied around, and you can see the full tech FAQ on the new Core site. If you preorder (they call it purchasing a HardCore membership; man, was that a bad name choice) you get access to the betas, with an apparent release date of the first build sometime in Q1. I’m remaining cautiously optimistic. If they deliver, they’ll be in a good place to pick up disgruntled users of Maya and XSI.

Okay, I’m actually hitting “post” this time; I have about three drafts on my iPhone that are now irrelevant. ^_^