This is my first experiment in making a proper bone based facial animation rig. This rig doesn't contain a single morph target and all of the deformation is done with skin envelopes.
Previously, I've often used morph targets coupled with jaw and head bones, which works fine but is pretty laborious to set up, as you all probably know. This experiment has probably been even more work, but I think that's because it's the first time I've ever done it. I'm hoping this setup can be transferred to other characters with relative ease.
The benefits are that the face looks less like a collection of poses, it keeps its volume nicely through different expressions, and manipulating it with controls on the actual face makes it just plain easy!
The hazards are that there's a lot of tweaking of skin envelopes to get the right parts of the face following the right bones, and getting a proper "oo" shape is the toughest part (though it's the same with morphs).
It's still a work in progress: the worried-brow shape needs work, I'll add a tongue, and the "oo" could be a little better.

How are you, Brad? You're doing some nice-looking work again.
A bone-based face rig is a totally new thing for me, so can you tell me how many nights it took you to rig it up? I can just imagine that skinning... hell.

Now that looks really, really nice. Great work, Brad. You say the hazards are that there's a lot of tweaking of skin envelopes, but whether it's morph targets or bones for facial expressions, both are going to be time consuming. It's whichever gives the better results at the end of the day, and having seen those .mov files I know what I'll be using in future. Can't wait to see the finished rig.

I especially like how you set up your sliders with intuitive naming... and they work great in conjunction with the helpers on the face. Do the helpers have limited movement (clamped to the slider range)?

Very impressed with the setup. The mouth area is awesome; I like how everything reacts to the movement. You totally lose that pose-to-pose look you speak of that you normally get with morphs.

Impressed by how the movement of the face controls goes in a proper direction without breaking the face, keeping its volume. Lovely manipulators that let you play on screen like clay. Are the bones moving on a path or free in 3D space?

I like the eyes and brows area, but I'm missing three controls for the brow to get more specific poses. You can still have one parent that makes the whole eyebrow go up and down, but it would be lovely to have three extra points to control it.

The same for the eyelids would be great.

Your brain is amazing. I've been doing a similar thing on the job, as you know, an approach with morphs and a nice GUI, and it's coming along nicely, but I ended up doing script controllers with custom attributes, with things like a third morph that activates when two others break the mesh and fixes the problem. It's very powerful, but it's still missing the way you manipulate on screen, more like clay.

So in your setup, for a new rig you'll have to redo the skinning and reposition the bones; in mine it would be redoing the morphs. I don't know which approach is quicker, but for a good, accurate rig, doing morphs or bones should take about the same amount of time.

The clever idea is to have a system that can be reused for different meshes. Mine attaches the new mesh and erases the old one in all the targets, so the idea is to only redo the morphs while the connection script and GUI stay. Yours redoes the skinning and moves the bones.

One good thing about bones is that you can redo the face mesh easily and the system keeps working. For morphs I end up putting an Edit Poly on top of the mesh and pasting that Edit Poly onto all the other morph targets by script, then reloading the targets. With morphs it's a bit tricky, and you need to be careful how you use Edit Poly in the stack.

Both systems have their good points.

Well, master, your padawan continues learning at his new job. I'll try to do stuff to show all these new approaches I've been working on these last months.

Thanks for the compliments guys. To answer your questions:
The manipulators are point helpers like I have on the skeleton rig. The lines point to their parent and come from turning on the link display option (with "draw links as lines" turned on in the preferences).
These manipulator points have a position list with a position XYZ controller at the base with a position constraint on top of that. Let me give you a little text picture of the list:
-> Position XYZ
Position constraint
The Position XYZ is the active controller, so you can move the manipulator around, but the position constraint on top cancels out that movement. This gives you the ability to use the manipulator's movement to drive other things while still having control over its position (the object the manipulator is constrained to determines where it sits).
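This isn't the 3ds Max API, but the position-list trick can be sketched in plain Python to show the idea: the constraint decides where the point actually sits, while the raw Position XYZ values (how far it was dragged) remain readable as a driver. All names here are illustrative assumptions.

```python
# Sketch of a position list: a position constraint on top cancels the
# active Position XYZ controller's movement, so the manipulator stays
# put while the drag amount is still available to drive other things.

class ManipulatorPoint:
    def __init__(self, constraint_target):
        self.xyz = [0.0, 0.0, 0.0]            # active Position XYZ controller
        self.constraint_target = constraint_target

    def move(self, dx, dy, dz):
        # The animator drags the manipulator: only the XYZ controller changes.
        self.xyz = [self.xyz[0] + dx, self.xyz[1] + dy, self.xyz[2] + dz]

    @property
    def position(self):
        # The constraint sits above the XYZ controller in the list, so the
        # final evaluated position ignores the drag entirely.
        return list(self.constraint_target)

    @property
    def driver_value(self):
        # Sliders and other parameters can be wired to the raw XYZ values,
        # which record how far the manipulator was dragged.
        return list(self.xyz)

p = ManipulatorPoint(constraint_target=(1.0, 2.0, 3.0))
p.move(5.0, 0.0, 0.0)
print(p.position)      # stays at the constrained position: [1.0, 2.0, 3.0]
print(p.driver_value)  # but the drag is still readable: [5.0, 0.0, 0.0]
```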
The manipulators are two-way wired to the custom attribute sliders. The sliders are the ones that hold the keyframes and they have built in limits (though you can break those limits by changing the keyframe values). The limits on the manipulators are done in the wiring dialog.
For example, let's say I want to wire the jaw up/down slider to the x position of the manipulator. In the wiring dialog on the manipulator's side there will be the word jaw_up_down_slider (or something like that) and on the slider's side there will be X_Position. To limit the x-axis position of the manipulator between 0 and 100, on the manipulator's side I'll instead put in:
if jaw_up_down_slider < 0 then 0 else
if jaw_up_down_slider > 100 then 100 else
jaw_up_down_slider
This basically says that if the amount the manipulator moves is less than 0 then stay at 0 and if it's more than 100 stay at 100, otherwise, move by the same amount as the slider.
On the slider's side of the wire parameters dialog I enter:
if X_Position < 0 then 0 else
if X_Position > 100 then 100 else
X_Position
I've yet to understand in which situations wiring only one side works and in which it doesn't; doing both is just to make sure it works.
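The wiring expressions above are just a clamp to the 0-100 range; written as Python rather than a wire expression, the logic is:

```python
# The same if/else chain as the wire-parameter expressions, as a function.
def clamp(value, lo=0.0, hi=100.0):
    # if value < lo then lo else if value > hi then hi else value
    if value < lo:
        return lo
    if value > hi:
        return hi
    return value

# Applying the same clamp on both sides of the two-way wire keeps the
# manipulator and the slider in agreement at the limits.
print(clamp(-25))   # 0.0
print(clamp(150))   # 100.0
print(clamp(42.5))  # 42.5
```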
As you probably guessed, the "hidden stuff" layer is the guts of it all. It's mainly made up of points position-constrained between certain parts of the face with varying percentages. The bones are all stretchy, with position and lookat constraints to various points (the bones are all set to stretch except for the bones at the lips, which are set to squash).
The pink bones seen in the second video are mainly to create bulges. When certain points are moved closer together it pushes out a point a few of the pink bones are constrained to, simulating bulge. The amount of bulge and its falloff is controlled by the reaction manager. Most of these cheek bulges are done but some are still to do.
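The bulge idea can be sketched outside Max: when two constrained points move closer together, a third point gets pushed out by an amount with a falloff curve, standing in for the reaction manager. The function name and the smoothstep falloff are assumptions for illustration, not the actual setup.

```python
# Sketch: bulge amount as a function of how much two points have
# compressed toward each other, with a smooth falloff.
import math

def bulge_offset(p_a, p_b, rest_distance, max_bulge=1.0):
    """How far to push the bulge point out, given how much the distance
    between p_a and p_b has shrunk relative to its rest distance."""
    d = math.dist(p_a, p_b)
    compression = max(0.0, (rest_distance - d) / rest_distance)
    # Smoothstep-style falloff so the bulge eases in rather than popping.
    t = compression * compression * (3.0 - 2.0 * compression)
    return max_bulge * t

# Points at rest: no bulge. Points pushed together: the cheek point bulges.
print(bulge_offset((0, 0, 0), (2, 0, 0), rest_distance=2.0))  # 0.0
print(bulge_offset((0, 0, 0), (1, 0, 0), rest_distance=2.0))  # 0.5
```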
In 3D everything kind of has to work down a hierarchy: in this case the jaw moves the corners of the mouth up and down, the corners move the top and bottom lips when moved left and right, and the parts of the lips between the top, bottom and corners are affected by everything else. The bones are all constrained to follow all of these parts, and they move the skin.
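A position constraint with varying percentages is just a weighted average of its target positions, so the hierarchy above can be sketched as a chain of blends. The specific weights here are illustrative assumptions, not the rig's actual values.

```python
# Sketch: a position constraint as a weighted average of target positions.
def blend(targets_and_weights):
    total = sum(w for _, w in targets_and_weights)
    return tuple(
        sum(p[i] * w for p, w in targets_and_weights) / total
        for i in range(3)
    )

jaw       = (0.0, -1.0, 0.0)   # jaw bone, dropped open one unit
skull     = (0.0,  1.0, 0.0)   # a fixed point on the skull
# A mouth corner follows both jaw and skull, so it travels half as far
# as the jaw when the mouth opens.
corner    = blend([(jaw, 50), (skull, 50)])
# A lower-lip midpoint follows the jaw strongly and the corner a little.
lower_mid = blend([(jaw, 80), (corner, 20)])
print(corner)     # (0.0, 0.0, 0.0)
print(lower_mid)  # (0.0, -0.8, 0.0)
```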
The skinning did take longer than I thought it would, but that was because it was the first time I'd ever skinned a face. Now that I have a better grasp of what should influence what, setting it up for another character should be a lot quicker. I guess it's the same as morphs: the first time you do it, you're not sure what shapes you need to make, but the more you do it, the more you learn which ones you'll need and which you won't.
I hope I've answered your questions. I'll try to post some more images showing how it works in more depth.
:¬)

Thanks guys.
Since I first posted I've been putting the facial rig onto a character with the skeleton rig already on it. I kind of expected it, but when you link all of those constrained points onto a rig with a lot of constraints already there, it slowed to a crawl. Well, to be more precise, it was quite acceptable when moving the arms, legs and head, but it would pause for a good 10 to 20 seconds before you could move or rotate the hips. Not good.
Hopefully 3dsmax 8 will continue its trend of rig speed increases, but until then I'm going to unlink the facial setup from the body and use two skin modifiers, one for the face and one for the body on top - in effect using the facial setup in place of where the morpher would normally go.
It's already much quicker without it linked but the problem now is to get the manipulator controls to follow the face while the actual bones and points deforming it are back at the original skin pose position.
Just thought I'd leave you with a picture of the complete facial rig (ie. with the "hidden stuff" layer unhidden).
:¬)

Woow...that's a lot of green
Very cool rig though.
If you'd like to get this rig tested for performance with max8, you could consider passing it to Discreet. Or pass it to me or Paul Neale or anybody else you know is working with Discreet testing max8 and we can drop it into the right hands. (Don't put it off for too long, though, since there's only so much time left if you want your rig to be tested.)

Since I first posted I've been putting the facial rig onto a character with the skeleton rig already on it. I kind of expected it, but when you link all of those constrained points onto a rig with a lot of constraints already there, it slowed to a crawl.

Hi Brad,
What I would do here is not actually link the facial rig to the body rig. What I usually do is add a morph modifier to the skinned mesh and skin a copy of the mesh to the facial rig. Then pick the face copy as a target for the main skinned mesh and put the value up to 100 (don't forget to check auto update targets) and leave it there; now any change you make to the facial rig will carry across. You can also easily add control objects over the top, link them to the full body rig, and wire them to drive the real control objects on the facial rig. Maybe it will work for you too?

marktsang: Yeah, that'd probably work, though the second skin modifier works in pretty much the same way: facial deformation with body deformation on top of it.
As I mentioned before, the only problem I had with that was that you want the controls on the face to follow where the head actually goes. I was able to solve this using the good ol' expose transform helper to calculate each control's parent's local position in relation to a fixed copy of the head bone, then using those values to position an object linked to the moving head bone, which the facial controls are then linked to. I also used this method to get the rotation of the eyes in relation to the head bone to drive the eyelids that move with the eyeballs' rotation (did any of what I just said make sense?).
So now, the facial bones and constraints all stay fixed to their original position (not adding to the calculation of the body rig) and only the facial controls move with the body rig.
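The expose-transform trick can be sketched as plain vector math: record each control's offset relative to a fixed copy of the head bone, then reapply that offset to wherever the animated head actually is. This is translation-only for simplicity (the real setup also deals with rotation), and the names are illustrative.

```python
# Sketch: make facial controls follow the animated head while the
# deforming bones stay fixed at the skin pose.

def local_offset(control_pos, fixed_head_pos):
    # The control's position relative to the frozen skin-pose head bone.
    return tuple(c - h for c, h in zip(control_pos, fixed_head_pos))

def follow_head(offset, moving_head_pos):
    # Reapply that same offset to the moving head bone.
    return tuple(o + h for o, h in zip(offset, moving_head_pos))

fixed_head  = (0.0, 0.0, 0.0)   # head bone frozen at the skin pose
control     = (1.0, 2.0, 0.0)   # a facial control at the skin pose
offset      = local_offset(control, fixed_head)

moving_head = (10.0, 5.0, 0.0)  # where the animated head actually is
print(follow_head(offset, moving_head))  # (11.0, 7.0, 0.0)
```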
:¬)
