
Just a quick post. I’ve been messing around a bit with facial rigging.

The tricky bit is that it needs to fit within the framework of the autorigging system, be fast to set up for background characters without much manual work, and still be flexible enough to let us get really top-notch results for main characters.

After discussions with Angela and some tests, I’ve settled on a hybrid approach between bones and shape keys. The idea is that even with only bones, the face rig should work well enough for background characters. But it will be supplemented with shape keys for main characters. Things like lip roll, nostril flare, squinting around the eyes, etc. will be done with shape keys. Corrective shape keys will also be used to preserve volume and define creases better.
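Corrective shape keys like these are typically hooked up with drivers, and the mapping from bone rotation to key value is often just a clamped linear ramp. Here's a minimal sketch of that mapping in plain Python — the function name and the jaw-open angle range are illustrative assumptions, not details of the actual rig:

```python
import math

def corrective_weight(bone_angle_rad, start_rad, end_rad):
    """Map a driving bone's rotation to a 0-1 shape key value.

    Below start_rad the corrective is off; at end_rad it is fully on;
    in between it ramps linearly (clamped at both ends).
    """
    if end_rad == start_rad:
        return 0.0
    t = (bone_angle_rad - start_rad) / (end_rad - start_rad)
    return max(0.0, min(1.0, t))

# Hypothetical example: a volume-preserving corrective that ramps in
# over the second half of a 30-degree jaw rotation.
jaw_open = math.radians(22.5)
w = corrective_weight(jaw_open, math.radians(15), math.radians(30))
```

In Blender this same ramp would live in a driver expression on the shape key's value, with the bone's rotation as the driver variable.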

It’s not perfect, and the facial expressions don’t necessarily match her character, but it’s a good proof-of-concept, I think.

–Nathan

This entry was posted on Sunday, December 13th, 2009 at 3:49 pm and is filed under News.

I think this could be more organic if the top part of the head had more movement; right now it looks a little like a robot.

As an exercise, try making the biggest smile you ever have. You should notice that your whole head moves, not just the lower part. I think those small movements would make the whole thing look more natural.

Nice job Nathan. If this is a bones-only rig without any shape keys, then big congratulations. And I am even more impressed that this should be included in the Blender 2.5 autorigging system! Blender with FaceRobot and BipedRig features 🙂

If the main character has this combined with shape keys, then this will be a very useful workflow for real studio production.

If you would like to combine this with single-bone influence for detail-level control, check my PM at BlenderArtists.org.

@Andrew Fenn:
As tinonetic pointed out, this is just a test of the mouth area in isolation.

@JiriH:
Just to nit-pick: there isn't a Blender 2.5 auto-rigging system. Rather, there is a separate set of Python scripts being written to do auto-rigging for Durian. It is not intended to be a built-in feature, but of course it is being made with the intent of being useful to people after Durian as well.

@MeshWeaver:
OGV is for “Ogg video”. It’s the default file extension that ffmpeg2theora outputs, so I just stick with that.
Technically, I could use *.ogg instead, but as you pointed out, that extension is most associated with Ogg audio.

Well, the difference is whether it’s an officially maintained part of Blender.

There are many ways one could build an autorigging system, and they all suit different use-cases. I don’t think it’s really appropriate to have ours be an officially maintained part of Blender. Especially since we have yet to see how it holds up over the course of our production (I’m sure we’ll find a lot of mistakes we made, and discover ways we could have done things better).

“… we still have yet to see how it holds up over the course of our production (I’m sure we’ll find a lot of mistakes we made, and discover ways that we could have done things better).”

What do they say? “The second piece is always better” (because everybody learns from their mistakes).

It would be very, very helpful if the team members could publicly document all the things that they would do differently, and why. Although it’s already an outstanding opportunity to learn from the blend files etc., from the outside it’s difficult to see the tradeoffs, let alone the experience of actually living with those tradeoffs through to the final product.

I’m pretty sure continued development/refinement of the released Python scripts and utilities would get a boost if there were a review (a 20/20-hindsight report) based on production experience.

Just my thoughts. And many, many thanks to all the people involved with Durian, but also with Blender development in general. Progress is absolutely amazing on the creative as well as on the code front.

In Max we used to create a very simple proxy mesh from the character and then create the shape keys on that mesh. This mesh then drives the characters using shrinkwrap. So you have one mesh, and with shrinkwrap you can drive every character you want (after some corrections to proportions if needed). But I haven’t yet tried whether this works in Blender. Sorry if not.
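The core idea of that workflow translates regardless of package: the shape keys deform a low-resolution proxy, and a projection step carries that deformation onto each character mesh. Here's a toy sketch of the projection in plain Python — the function name is made up, and snapping to the nearest proxy *vertex* is a deliberate simplification (a real Shrinkwrap modifier projects onto the proxy's surface):

```python
import math

def transfer_deltas(proxy_verts, proxy_deltas, char_verts):
    """Copy each proxy vertex's shape-key delta to the nearest
    character vertex -- a crude stand-in for driving a character
    mesh from a shape-keyed proxy via shrinkwrap."""
    out = []
    for cv in char_verts:
        # Find the closest proxy vertex to this character vertex.
        nearest = min(range(len(proxy_verts)),
                      key=lambda i: math.dist(cv, proxy_verts[i]))
        dx, dy, dz = proxy_deltas[nearest]
        out.append((cv[0] + dx, cv[1] + dy, cv[2] + dz))
    return out

# Two proxy vertices with opposite deltas; each character vertex
# picks up the delta of whichever proxy vertex it sits closest to.
moved = transfer_deltas(
    proxy_verts=[(0, 0, 0), (1, 0, 0)],
    proxy_deltas=[(0, 0, 1), (0, 0, -1)],
    char_verts=[(0.1, 0, 0), (0.9, 0, 0)],
)
```

In Blender the equivalent setup would be a Shrinkwrap modifier on each character mesh targeting the shape-keyed proxy, so no per-character shape keys are needed.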

Maybe it’s me, but what I find so cool is that all of this is being done with FREE software! We are all being moved in some way by this open source experience. There’s this kind of organic movement that’s just asking the individual to dream! Dude, this rocks!