Ok, so I was wondering about the topology of the shoulder-blade area. I’ve been taught that when modeling a human character you should never model the top of the shoulder, and should instead leave it flat for better deformation in animation. Is this wrong?

But then I see a lot of posts here where the shoulder muscle on top is modeled in. I’m very confused. What route should I follow to get the best results, both visually and animation-wise?

Hi guys! I copied a head I saw somewhere, but I forgot where. It’s basically a basemesh to add subdivisions and loops to, but what I’d like to know is whether it has good topology and whether I’m on the right track… I’d also really like suggestions about the nose: how do I add that circular loop that goes from beneath the nose, around the side of the nostril, and into the hole?

No, I mean a mesh dense enough to allow you to actually sculpt the expressions into and keep the underlying shape.

For example… Notice how the loops on the bridge of the nose or under the lower eyelids are aligned to the wrinkles that would form there, and not to the static forms of the neutral shape. Those forms are only implied, because they won’t always be there.

There are ways to deal with such a dense mesh; after all, there was no Mudbox or ZBrush when Weta made Gollum, nor PaintDeform from Daniel Pook-Kolb, and the Wrap deformer was a lot slower as well. Nowadays it’s a lot easier to deal with dense geometry.
Also, I suppose they’ve built a less detailed version and subdivided it to get this result. But it shows that the nicer the facial animation you want to get, the more geometry it will probably require.

Which one, that they’ve probably subdivided a relatively dense model to get this super-dense model? It just comes with experience: you look at enough models with subdiv turned on and you get to know how certain patterns are created. Although it’s still more of a suspicion… I’m off to sleep now, but I’ll give this image another look tomorrow to explain some of what I think about it…

I’m just confused as to why anyone would want to try to animate a mesh with millions of polygons when they could simply use subdiv and keep the poly count under 100k.

Those aren’t millions of polygons, I’d say it’s about 20K for the head in itself.
Then they don’t need muscle simulations and such for animation so they probably have a low-res segmented model of the body put together with this face rig and the result should be able to run in real time.

And the mesh is this dense because otherwise they wouldn’t get a detailed enough representation. The virtual production used models with significantly less detail for the realtime feedback on set because there it was enough for the director to work with.

Tamas is correct. Jeff Unay and his team worked with extremely high res meshes to sculpt in wrinkles and other details and used no displacement maps at all. This was to allow facial wrinkling to occur on the mesh itself rather than it dissolving on and off with displacement maps. Animation director Andy Jones said that a facial rig of this complexity with such a dense mesh couldn’t have been attempted 5 years ago as the processing power and graphic card speeds simply didn’t exist.

This way of doing things just replaces the need for using driven-displacement (wrinkle) maps on the face. They’ll still be using regular ol’ displacement maps on the face to cover the finer detail in the skin.
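For reference, here’s a minimal sketch of the driven-displacement (wrinkle map) technique that the sculpted-mesh approach replaces: a wrinkle displacement is blended on top of the static fine-detail map by a rig-driven weight, so the wrinkles fade on and off with the expression. All names and numbers here are purely illustrative, not from any production rig.

```python
def displaced_height(base_disp, wrinkle_disp, brow_raise_weight):
    """Blend a wrinkle displacement on top of the static fine-detail map.

    base_disp         -- static displacement sample (fine skin detail)
    wrinkle_disp      -- extra displacement sampled from the wrinkle map
    brow_raise_weight -- 0..1 weight driven by the facial rig (hypothetical driver)
    """
    w = max(0.0, min(1.0, brow_raise_weight))  # clamp the rig driver to 0..1
    return base_disp + w * wrinkle_disp

# Neutral pose: the wrinkle map contributes nothing.
print(displaced_height(0.02, 0.10, 0.0))
# Brow fully raised: the full wrinkle depth is added to the static detail.
print(displaced_height(0.02, 0.10, 1.0))
```

This is exactly the “dissolving on and off” behaviour mentioned above: because the wrinkle lives in a map rather than the mesh, it can only fade in and out with the driver weight, whereas a sculpted shape deforms the surface itself.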

At a guess I’d say this is the level 1 mesh. The modeller responsible for the base shape will still probably be working on a level 0 mesh.

Peter Syomka (http://syomka.cgsociety.org/gallery/) modelled Neytiri’s face - and I think Florian Fernandez modelled the body (although it’s been a while, so things could have been reassigned after I left).

As for the shapes, it’s fairly easy and straightforward to set up a wrap deformer and use a lower res version to create ‘sketch’ versions of each blendshape. Then move on and refine the details, add wrinkles etc. on the final mesh.
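The wrap workflow above can be sketched very roughly as: bind each hi-res vertex to the low-res mesh, then carry the low-res ‘sketch’ shape’s deltas over to the hi-res mesh. A real wrap deformer binds to the surface with smooth falloff weights; this nearest-vertex version (all data made up) is just an illustration of the idea.

```python
def bind(hi_verts, lo_verts):
    """For each hi-res vertex, find the index of the closest low-res vertex."""
    def dist2(a, b):
        return sum((ax - bx) ** 2 for ax, bx in zip(a, b))
    return [min(range(len(lo_verts)), key=lambda i: dist2(v, lo_verts[i]))
            for v in hi_verts]

def apply_wrap(hi_verts, lo_base, lo_shape, binding):
    """Move each hi-res vertex by the delta of its bound low-res vertex."""
    out = []
    for v, i in zip(hi_verts, binding):
        delta = [s - b for s, b in zip(lo_shape[i], lo_base[i])]
        out.append(tuple(c + d for c, d in zip(v, delta)))
    return out

# Two low-res vertices; the 'sketch' blendshape raises the left one by 0.5.
lo_base  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
lo_shape = [(0.0, 0.5, 0.0), (1.0, 0.0, 0.0)]
hi_verts = [(0.1, 0.0, 0.0), (0.9, 0.0, 0.0)]

binding = bind(hi_verts, lo_base)
print(apply_wrap(hi_verts, lo_base, lo_shape, binding))
```

Once the hi-res mesh has picked up the sketch shape this way, you’d bake it out and refine the wrinkles and fine details directly on the final mesh, as described above.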

I think I’ve mentioned here before that I’m also re-using the lower res mesh (and its shapes) nowadays with relatively OK results… Then again it’d probably not do for anything even just remotely close to the level of Avatar

To make it a bit clearer:
You don’t need a mesh as dense as the Avatar one above… that is a tessellated mesh, not the base. The base is more like 1/4 of that polycount, so that is what you model. Then you can smooth it out and, if necessary, make the blendshapes, or even angle-driven displacement/normal maps, on a tessellated level.

You can model in all the basic creases and animation loops you need. In this Avatar model it’s all a bit unclear because we are not actually looking at the modelled level. You can see the major creases made below the eye, the top brow, the forehead, and below the nose, particularly for her sneer.

Take into account what your character is going to do, and base your density and detail on that.

So in short… just wanted to point out that you don’t have to model at such density.
Maybe you could even tessellate your low-poly model, edit that mesh to add the detail in, and then tessellate it again for its final look for rendering and displacement shapes.
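The “1/4 the polycount” arithmetic is easy to sanity-check: each Catmull-Clark subdivision level turns every quad into four, so one tessellation step quadruples the count. A quick sketch, with purely illustrative numbers (the 20K head estimate comes from earlier in the thread):

```python
def subdivided_quads(base_quads, levels):
    """Quad count of an all-quad mesh after `levels` Catmull-Clark steps.

    Assumes an all-quad base; each step splits every quad into four.
    """
    return base_quads * 4 ** levels

base = 5_000                        # a hypothetical base head
print(subdivided_quads(base, 1))    # one level: 4x the base
print(subdivided_quads(base, 2))    # two levels: 16x the base
```

So a ~5K base head reaches roughly the ~20K estimate above after a single level, and you only ever hand-model the base.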

My impression is that this is a completely different version of Jake, the one that the Lightstorm guys built for the virtual production workflow and used in MotionBuilder. I’d say it hasn’t got much to do with the final model and is driven directly by the face mocap transform data, in real time.

The final, movie version is at least 10-15x as many polygons and uses blend shapes for the facial rig instead.