The July 2014 update is another big one, introducing new nodes like »Fit NURBS«,
»Curls advanced«, »Curvature Amplifier«, »Grouping in Form«, »Follow CurveList«,
»Stretch Hair« and »Hair Filler Rounded« — for a full description, see the
si-community thread and the updated documentation.

The Kristinka Hair toolset is a new and unique way to set up, style and simulate hair using ICE nodes:

-) A set of fully customizable ICE nodes. Scalable, from only a few basic compounds for building basic hair, up to very complex structures.
-) Hair styling that always considers the whole shape of the hair. Styling works well for short and for long hair.
-) Unlimited hair length, unlimited number of hair segments.
-) Automatic, procedural generation of details, always with full control. Locks, clumps, curls and turbulence are created by ICE compounds.
-) Additional modifiers, like cutting hairs by external geometry, constant strand length for keyframe animation, resampling and subdividing strands, morphing with another hair, and modulating the hair's distribution over the emitter, so the user can increase density in the most visible areas.
-) Full support for Softimage's built-in Strand Dynamics Framework simulation engine.
-) Only factory ICE nodes were used, so it should work nicely with any Softimage version from 7.01 on.

I see. Yes, Cinema 4D was something I was going to look into as well. I wanted to do something along the lines of the FLV file on your site of the woman with long hair giving a kick. Just so I am clear: do you mean that it would be easier to animate the hair rather than simulate it? If so, I may just do that. I also heard on the XSI mailing list of using ICE Syflex for simulation. I am not sure if that would be on the follow-NURBS items or on hair from curves. Thanks for explaining. I am just doing hair tests at the moment to see what would work best for me.
John

Well, I already have a few setups, I just want to keep them a day or two, maybe I'll find some improvement. Don't give up now.

The 'giving a kick' setup, as well as setups for attaching to mesh stripes (that is, for Syflex), were both already available in some version of kH.

What I wanted to say is, it's really hard to find some unified method for simulation without forcing people to go deeper into ICE stuff.

Syflex and mesh stripes could be a total solution, as it provides self-collision, well-defined vectors for kH modifiers on top of the sim, all features of the 'beast' engine after all. But it's not that fast, to put it nicely. Also, getting hair volume solely by self-collision is, in many cases, I think, a bit masochistic.

I wouldn't animate it. I also wouldn't simulate geometry used for styling; for me this looks too stiff (and it's not as easy as it looks at first). The 'centrifugal' motion, so typical for hair, is lost when hairs are attached to just a few pieces of cloth.

The renderer "turtle" used by this scene, is not currently available. The "turtle" renderer will be used instead.

So finally there are two setups for download: one for the 'One Point Simulation' that comes with kH, and one for the factory Strand Dynamics Framework. Both have two separate point clouds: 'point cloud_simulated', prepared for caching, and 'point cloud_load_cache', prepared for cache loading. In order to see something meaningful, you'll need to cache the simulated point cloud and load the cache into the 'load_cache' one. Here, the caching process takes about a minute for SDF (Strand Dynamics Framework), much less for OPS (One Point Simulation), for about 200 frames.

'Load cache' point cloud: this is created from the simulated one - the simulation stack is deleted, and I've used the 'cache on file' node for loading. This allows having everything in a single ICE tree. However, you'll need to replace the emitter mesh with a static copy (no deformations). In this single ICE tree there is:
1. basic emitter node, kH Follow NURBS or another 'form' node. The 'form' node still needs to be present, because it creates static vectors for later deformations.
2. load cache node
3. node for applying the particle motion to strands, for OPS, or... node for calculating the 'dynamic' deformation vectors, for SDF.
4. modifiers (bend, curls, randomize)
5. hair filler

A few words about the simulations:

'One Point Simulation' simulates only on particles; it doesn't simulate on strands. In post-simulation, the spring motion is applied to the strands, and the motion on the particles is disabled.
It just bends the strands - for example, you can't get an 'S'-like shape. However, it's 'by design' faster than a 'real' strand simulation, and I'd believe it's more predictable too.
There are only three simulation parameters, typical for simple spring simulations: weight of linear motion, weight of angular motion, and weight of drag force. Preventing collision with external objects is still possible, but only in post-simulation, more like a small correction.
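The 'one point' idea can be sketched in plain Python (all names here are illustrative, not actual kH parameters; 1D positions for brevity - the real node works on ICE particle data):

```python
def one_point_step(pos, vel, goal, stiffness=0.2, drag=0.1, dt=1.0):
    """One Euler step of a damped spring pulling the particle toward
    its goal position."""
    accel = (goal - pos) * stiffness
    vel = (vel + accel * dt) * (1.0 - drag)  # drag removes energy each step
    return pos + vel * dt, vel

def bend_strand(points, particle_pos, strand_ratio):
    """Post-simulation: bend the strand toward the simulated particle.
    The offset is weighted by the 0-to-1 'strand ratio' array, so the
    root stays fixed and the tip reaches the particle - a plain bend,
    never an 'S' shape."""
    offset = particle_pos - points[-1]
    return [p + offset * t for p, t in zip(points, strand_ratio)]
```

Because only one particle is simulated per hair, the cost grows with hair count, not with segment count - which is why it's faster by design.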

The factory Strand Dynamics Framework is a full simulation on strands. Collision is calculated together with the simulation. There is no self-collision, though. In the movie a few posts above, I've mixed the SDF sim with the transformed original strand shape. In the simulated ICE tree, there is a node called 'kH3 restore original shape'. It's pretty much the same as 'strand groom force' from Melena/MT strands, but this time it's 'brute force'. With full weight, this node will just kill the simulation, while still transforming the strands by the emitter's motion. You can weight this one along the strands, for example setting a higher weight at the roots and zero weight at the tips. It also seems to help in calming the SDF sim.
Tip: keep the weight less than one, even something like 0.99. Don't kill the simulation completely (otherwise, I'm afraid, ICE will come in with optimizations and a whole chain of side effects begins - at least I think so).

In the 'cache_load' ICE tree of SDF, you'll see the 'kH3 Apply Strand Orientation Delta' node, which re-calculates the up-vector and tangent. In kH, these vectors are called 'kH_strandUpVector' and 'kh_Strand Tangent'. They are custom attributes, to prevent ICE from killing them at its own choice. These two vectors are very important for the correct behavior of modifiers (curls, bend). The method of this node is really basic - if the deformed and transformed strand shapes differ too much (more than 180 degrees, I think), the curl (or another modifier) will suddenly rotate by 180 degrees.
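The sudden flip is essentially a sign ambiguity in the recomputed frame. A minimal sketch of the usual dot-product guard (a generic stabilization trick, not the actual node's implementation):

```python
def stabilize_up_vector(new_up, prev_up):
    """If the recomputed up-vector ends up on the 'wrong' side of the
    previous frame's up-vector (negative dot product), flip it, so
    curl/bend modifiers don't suddenly rotate by 180 degrees."""
    dot = sum(a * b for a, b in zip(new_up, prev_up))
    if dot < 0.0:
        return tuple(-a for a in new_up)
    return tuple(new_up)
```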

For those users who want better, I think the gold mine is Guillaume Laforge's Deform Strand Extrusion. I think this is the only publicly available node with, let's call it, a 'smooth flip' (similar to the standard 'deform by curve' OP). Of course, some adaptation would be required. In the movie, I've used something along the lines of the mentioned node, but I still don't feel ready to share it.

About collision nodes:

I've used 'Test Collision with Surface' from SDF - the same as 'push strands outside the geo', or similar. The node looks at the closest location on the surface. According to the dot product of the point normal and the direction to the closest location, it decides where to push the strands. So, with some exaggerated motion, this could end up on the unwanted, opposite side. A 'safe' speed would be less than half the length of the collider's volume per sampling interval (that is, one frame, maybe a sub-frame in the newest SI). For the motion in the samples, let's say the head or an arm is 'fat' enough; ears or fingers, definitively not.
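A rough sketch of that inside/outside decision (the closest-location query is assumed to come from the collider; in the real setup it's the SDF compound doing the query):

```python
def push_outside(point, closest, normal):
    """Decide whether a strand point is inside the collider via the dot
    product of the outward surface normal and the vector from the closest
    surface location to the point; if inside, snap it back to the surface.
    Note the failure mode described above: with fast motion, the closest
    location can land on the opposite side of a thin collider, so the
    point gets pushed out the wrong way."""
    to_point = [p - c for p, c in zip(point, closest)]
    dot = sum(d * n for d, n in zip(to_point, normal))
    if dot < 0.0:  # behind the surface -> inside the volume
        return list(closest)
    return list(point)
```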

First of all, thank you very much for making all your compounds publicly available. I started digging into them and need to say that they look really impressive, at least the examples you wrapped up with the compounds.

I would just like to ask you a few general and workflow questions, as I might need to change some of my habits in order to be able to use your compounds and/or work with strands.

a. Coming from polygon modeling, I usually need subpatching to gain the smooth look (after deformation). Also, some displacement (zBrush displacement maps) might be needed. How do I grow fur on the subpatched model? Would I need to convert the polymeshes into NURBS? Or would I use a subpatched poly model (frozen) and deform this with my animated polygon mesh?

b. As I need a lot of guides which will be grown dynamically in the first place, how would I store these guide strands in order to have an efficient workflow? Would I convert them into curves (which I could probably save out with an object) and later on transform them back into guide strands? I plan to have two point clouds: one for the guide strands and one for the actual visible hair.

c. As I want to be able to clump the fur in a very controllable manner: do I need to grow clump strands which serve as the guides for the clumping, or can I alter the exact radius for clumping via a weight map (e.g. a different clumping radius in some areas)? I guess the clumping might need the guide strands, as each clump would need its own guide for dynamics? In case I need clump guides, could I pre-create a point cloud with the root positions, like the guide strands? I played around with even distribution of points on a mesh, as the clumping guides need a very similar distance; regular SI emitting will be too irregular.

Hi there,

a: AFAIK, ICE is unable to emit from subdivided geo. For emitting from complex geometry (not possible to describe with NURBS), a frozen subdivision is the preferred way in kH3 - Poly.Mesh > Subdivide or something. Strictly speaking, it may be possible to emulate subdivs in 2012, but I wouldn't be sure about the performance of this, to put it nicely.

There are two ways of emission in kH3, both for meshes and NURBS: 'guides and filler', and 'random'. 'Random' from a mesh is the ICE 'generate sample set' node. For a lot of hairs with simple styling - that is, fur - the preferred way is to *not* use any guides. There is a predefined 'strand profile' for each hair; this profile just follows the particle orientation. I found this much faster in this case. I'm guessing because there is no need to query the other particles; a particle doesn't need to know anything about the others.
Following this method, the simple spring simulation (in kH3 it's called the 'one point' sim) simulates only on particles, not on strands; the strands are just bent.

Once you have defined some kind of 'build linearly interpolated array', 'clumping' is trivial to do in ICE: you just use lerp (the linear interpolate node), with the lerp modulated by the predefined 'build linearly interpolated array'. This array is called 'strand ratio' in kH3 - the name comes from the ICE workshops in the early ICE days.
One simple and fast way is to clone the particles, spread them around the original, and finally interpolate back to the original. This is called 'splay hair' in kH3.
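In plain Python, the two building blocks look roughly like this (1D positions for brevity; `strand_ratio` and `clump` are illustrative names for what the ICE nodes compute):

```python
def strand_ratio(n):
    """The 'build linearly interpolated array': 0.0 at the root,
    1.0 at the tip, over n strand points."""
    return [i / (n - 1) for i in range(n)]

def clump(strand, guide, amount, ratio):
    """Lerp each strand point toward the clump guide, modulated by the
    ratio array, so roots stay put and tips gather together."""
    return [s + (g - s) * amount * t
            for s, g, t in zip(strand, guide, ratio)]
```

With `amount` driven by a weight map, the clumping radius can vary over the emitter, which addresses question c above.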

An example of all of what I said could be the 'deform hair' sample. All of that is only about 'fur' in kH3; long hair is another story.

b: according to 'a', I've tried to avoid curves as much as possible, as I never found them easy to tweak. A while ago, I was happy to find the same design decision in Max's Hair Farm plugin.

c: you could emit the guides from vertices, just as XSI hair does. In 'guides and filler' mode, kH3 also emits from vertices (or from procedurally created triangles from NURBS). However, there is a difference: kH3 creates three guides for each triangle. Later on, the filler just fills the triangle space with interpolated clones. This allows the creation of hair interpolation groups 'on the fly'; actually, these groups are already defined. Simple cloning in this method allows using just one ICE point cloud for everything.
XSI hair has a special process here - by running a 'hair split', a bunch of new guides is created on the border of the hair interpolation group, and merging them back deletes them (at least I think so), and so on.

kH2 and before used a modification of the factory 'emit filler strands' (or similar). But there were problems, precisely with the creation of hair interpolation groups. With some heavy styling, particles were unable to find all three guides, which caused problems in rendering - missing strands and so on. IMHO the method in 'emit filler strands' is nice because it allows proper interpolation even if points are outside of the triangle. Anyway, for more complex styling, having enough predefined data, I think someone would prefer something more robust, with simple cloning instead of geo queries all the time.

Also, there was a not-so-random emission method in kH2: ray-casting from a virtually created grid to the mesh, 'flattened' to its UV projection. But this one still had 'dead ray-casting shots' in memory, somehow (even though I think I've tried all combinations along the lines of removing from the array, selecting in the array, and so on). This produced some kind of infinite bounding box, and also another chain of problems in rendering, like unusable shadow maps from directional lights (in the case of a directional light, MR uses the scene extent for the shadow map area).

Long story short, as long as you're in the ICE-node-only kindergarten, and you want it to deal with huge counts of hairs and render them properly, IMHO you want the most 'simple and stupid' solution you can get.


Like you said, it seems not to be possible to emit particles or strands from subdivided geometry. A subdivided and frozen mesh seems to be the solution. For a still, no problem. But for an animation? How would I deform my hair mesh with the animated mesh? For proper results, I guess there is no way around animating the (low-res) mesh and then subdividing it afterwards (increasing the subd levels via the + key). Hull deformation is way too sloppy. How would this be done inside Softimage?

The other question which remains open is how to "store" the guide strands. I had so much retouching work to do on furry objects, as the short, almost tangent hair usually slipped through the mesh, leaving bald spots. I have now come up with a way to grow the hair dynamically, so that it flows correctly around the mesh, never intersecting with it. This can take a few minutes - at least several seconds per frame. A workflow killer. Actually, I just want to do this once and then "save" the strands in the correct position.

I thought converting them into curves might be a good idea, as these can be saved with SI. Converting them into curves doesn't seem to be difficult, but the only add-on which converts strands into curves merges all the curves into one curve. Converting them back seems difficult.

Another idea is to have as many vertex colors or vertex color maps as needed (one map for each segment of the strand), so that I can store the vector/strand positions relative to the root. With a simple object this might still work; with a high-poly one, baking this out to an open EXR map might be a solution.

Can you think of any alternatives, so that the once-simulated guide strands are immediately there after loading the scene, without any new simulations? Or can you think of a way to go the curve route?

P.S.: For the storage of the clumping guide positions: shouldn't it be possible to create a new topology with the new ICE modeling nodes from the smoothed-out particle positions? At least that could be a way to store this kind of data. As I never touched the modeling compounds, I didn't succeed in creating a topology from just the point positions (add vertex). Sounded easy, but I guess I need to figure this stuff out.

GATOR is your friend for all sorts of transfers of the 'classic' operators. Also, in Softimage, Model > Poly.Mesh > Subdivision transfers the classic operators too. But I'd choose GATOR; it's not topology-dependent.
In the mentioned 'deform hair' sample from kH3, the hair is deformed by a deformed copy of the emitter mesh.
It shouldn't be hard to make some special ICE cage deformer; it only depends on which level of fidelity you want to keep against performance.

Just me, again, I'd try to avoid curves.


O.k. But is there an alternative for storing the "grown" strands? A curve would survive storing of the Softimage file; a strand would not. Would I cache the "growth simulation" and reload it at a certain frame? I just wonder whether I could apply dynamics on top of this. That's the reason why I thought creating polygons from strands could help, if there is a way to convert them back into strands.

In the end it comes down to storing 10-20 positions per strand. Maps would cost a lot of time and aren't easy to change - loads of baking. Something like a "save array" (strand positions) would come in really handy, if it works for the whole mesh.
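Outside of ICE, that wished-for "save array" could be approximated by dumping the per-strand positions to a sidecar file and reading them back on scene load - a hypothetical sketch, not an existing tool or node:

```python
import json

def save_strands(path, strands):
    """strands: list of strands, each a list of [x, y, z] positions
    (10-20 per strand, as discussed above)."""
    with open(path, "w") as f:
        json.dump(strands, f)

def load_strands(path):
    """Read the positions back; they could then be fed to a point cloud."""
    with open(path) as f:
        return json.load(f)
```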

Unless the guides are procedurally modeled in ICE and reside in the modeling stack... I'm afraid not, at least not with ICE nodes only. That's the drawback of always-live operators.
Curves look like the most logical way.
Here I feel free to bore you with another sample from kH3, called 'in between curves'. It has what is usually called 'tangent space interpolation'. As long as you have a tangent map,
the strands interpolate 'around' the curved surface, so you need a much smaller count of curves.
I haven't tried this one with fur, though...
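As a baseline for comparison, a naive world-space 'in between' is just a per-point lerp of two guide curves (an illustrative sketch; the kH3 node instead blends in tangent space using the tangent map, so in-between strands wrap around the surface instead of cutting through it):

```python
def in_between(curve_a, curve_b, t):
    """World-space blend of two guide curves, point by point.
    t = 0 gives curve_a, t = 1 gives curve_b."""
    return [[a + (b - a) * t for a, b in zip(pa, pb)]
            for pa, pb in zip(curve_a, curve_b)]
```

The world-space version needs many curves on a curved scalp because the in-betweens sag toward the chord; the tangent-space variant is what makes a small curve count workable.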

Regarding strand count, some external renderers for SI - 3Delight, for one, that I know of - are able to render curves instead of visible strand segments, so the strand count could be even 4-5 or something, for a simple shape.


Oh the powah! That 'in between curves' file looks the friendliest so far, and since it has fewer curves with bezier handles to shape the base hair pose, I think I'd like to see what I could do with this technique. It seems way better than any other guide creation system I've used.

Mathaeus, what does this cube filter do in that scene? Anyway, I've been reading all these Kristinka threads today. Extremely interesting. I just have to filter out the stuff that I can actually use and animate from the stuff that is well beyond me at the moment. Thanks!

So the idea with the dynamics with these strands is to maybe create box deformers around the head so I can animate the (shorter) locks of hair? I think I'd prefer animating a few strand deformers over somehow sticking strips of geometry to the strands and using Syflex.

So I've been doing my best to understand the anatomy of these hair setups. (I find this very interesting, in case you guys haven't noticed.) I really want to find a way to hand-animate these NURBS deformers, but without creating these "overhead" situations you keep warning us about. Dynamics are cool, but I don't think I'd have the time or desire to really sit and tweak settings all day hoping the collisions look good after a 10-minute calculation. (I'll save that time for fluid point cloud sims.) So for someone like me, I think I can make something look good, stylized and interesting by keyframing simple motions for the deformers.

So let's start with this first setup in the attached pic. What you have is:
(My simplified artist view of this.)

-) A point cloud stuck onto the contour of a NURBS surface mesh.

-) This point cloud draws strands and conforms to the direction of some curves that somehow represent another cloud of points - the point cloud being the points on the curve?

-) These point cloud curves move to and follow the contour of another group of NURBS surfaces/deformers.

-) Some of these NURBS surfaces are constrained to the two box nulls in the scene and move with them. I guess they are there as a proxy for where the head and body could go.

Does that loosely describe what I'm seeing here?

So my question is: what would be the best way to keyframe these NURBS deformers (not procedurally, but by eyeballing them and placing them by hand), which in turn would drive the hair strands? I know how I could do it, but I'm not sure which way would be more efficient, since you often talk about how all these operator stacks are "live" and are therefore constantly being computed per frame... So I guess that means that the more keyframes get applied to the NURBS deformer "guide containers", the more we suffer from double transforms or something like that? Or does this overhead only happen if we animate the curves before the NURBS surfaces?

JPWestmas wrote:
So my question is: what would be the best way to keyframe these NURBS deformers (not procedurally, but by eyeballing them and placing them by hand), which in turn would drive the hair strands? I know how I could do it, but I'm not sure which way would be more efficient, since you often talk about how all these operator stacks are "live" and are therefore constantly being computed per frame... So I guess that means that the more keyframes get applied to the NURBS deformer "guide containers", the more we suffer from double transforms or something like that? Or does this overhead only happen if we animate the curves before the NURBS surfaces?

Thanks for any hints on this.

Hello,

I hope I'll add some setup, tomorrow or so. Not exactly animating the 'form generators' - this is tricky because of the good number of distance-dependent queries; moving them could cause 'jumping' from one to another.
More likely a wrapper mesh, which you generate from NURBS or something else. I just need to take a look at which way gives nicer performance.

Thank You for playing with this stuff.


Yeah that would definitely help if you would do that.

I'll see what I can find. This is indeed a super nice way to set up a complex wig. Thanks again for all this.