I would rather get fired than use Bullet - nDynamics has your back for the vast majority of things, and it's much easier to integrate back into other effects, or even just other control systems in your rig.

Sticking a transform onto a curve of nHair is easy - use a pointOnCurveInfo node to find your position. If you want it to rotate and align to the curve, it's a bit more involved - the pointOnCurveInfo gives you tangent and normal outputs, which you can feed into an aimConstraint as the aim and up vectors (although it's worth noting that using the normal as an up vector can sometimes become unpredictable, and I usually use a separate up curve).
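The positional half of that setup is only a couple of nodes. A minimal sketch, to be run inside Maya - the curve shape name `hairCurveShape` is a placeholder for whichever output curve you're riveting to:

```python
# Sketch only - assumes a nurbsCurve shape called "hairCurveShape" exists.
from maya import cmds

poci = cmds.createNode('pointOnCurveInfo', name='rivet_poci')
cmds.connectAttr('hairCurveShape.worldSpace[0]', poci + '.inputCurve')
cmds.setAttr(poci + '.turnOnPercentage', 1)  # treat parameter as a 0-1 fraction
cmds.setAttr(poci + '.parameter', 0.5)       # sit halfway along the curve

loc = cmds.spaceLocator(name='rivet_loc')[0]
cmds.connectAttr(poci + '.position', loc + '.translate')
# poci.tangent and poci.normal are available for the orientation setup
```

For the local-space variant mentioned below, connect `hairCurveShape.local` instead of `.worldSpace[0]` and parent the locator under the curve's transform.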

There isn't really an elegant way of generating the nHair itself other than clicking Create Hair System, and depending on a follicle is annoying, but if you work only with the nurbsCurve shape data (better yet, only in local space), it's very reliable.

Attaching things to nCloth can get annoying, but only if you try to do too much at the attachment stage itself. If you want a solution that will be robust to changes in topology, your best bet is to use the UV features of a follicle (and eat that meaty performance hit), but if it's just a basic tech mesh, you can access the local-space positions of a polygon mesh directly through the controlPoints attribute. You can then connect these to locators or (my favourite) the control points of a nurbs patch, from which you can use a pointOnSurfaceInfo node to generate a frame in the same way as with the curve. Bit more involved, but it gives a smoother rivet on a deforming mesh.
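The patch trick wires up like this. A hedged sketch for inside Maya - `techMeshShape`, `rivetPatchShape`, and the vertex IDs are all placeholders, and as noted this is not robust to topology changes:

```python
# Sketch only - assumes a driving mesh "techMeshShape" and a single-span
# nurbs plane "rivetPatchShape" whose 4 CVs should follow 4 mesh vertices.
from maya import cmds

# Drive the patch CVs with the mesh's local-space vertex positions.
for cv, vtx in enumerate([0, 1, 2, 3]):  # placeholder vertex IDs
    cmds.connectAttr('techMeshShape.controlPoints[%d]' % vtx,
                     'rivetPatchShape.controlPoints[%d]' % cv)

# Sample a frame from the middle of the patch.
posi = cmds.createNode('pointOnSurfaceInfo', name='rivet_posi')
cmds.connectAttr('rivetPatchShape.worldSpace[0]', posi + '.inputSurface')
cmds.setAttr(posi + '.parameterU', 0.5)
cmds.setAttr(posi + '.parameterV', 0.5)
# posi.position, posi.normal, and posi.tangentU can then drive a locator
# or an aimConstraint, as with the curve setup.
```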

Dynamics can be a very helpful tool, even a core part of a rig when used properly, and I’d love to see more people incorporating it. Hope this helps.

PS. The single coolest feature of nHair, to me, is that it lets you rivet either the root, the tip, both, or neither, for free. Work out which dynamic settings matter most to you and wrap them up in a controller, and you have a very artist-friendly object that can achieve engaging motion in a fraction of the keyframing time.
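That root/tip riveting lives on the follicle's pointLock attribute, so exposing it on a controller is one connection. A sketch for inside Maya - `follicleShape1` and the control name are placeholders:

```python
# Sketch only - assumes a follicle shape called "follicleShape1" exists.
# pointLock: 0 = no attach, 1 = base, 2 = tip, 3 = both ends.
from maya import cmds

ctrl = cmds.circle(name='hair_ctrl')[0]
cmds.addAttr(ctrl, longName='pointLock', attributeType='enum',
             enumName='NoAttach:Base:Tip:BothEnds', keyable=True)
cmds.connectAttr(ctrl + '.pointLock', 'follicleShape1.pointLock')
# Other favourites to expose the same way: hairSystemShape attrs like
# stiffness, damp, and startCurveAttract.
```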

PPS. This workflow relies a lot on passing nurbs and mesh information from one shape node to another, and aside from some weird gotchas involving viewport interaction and tweaking, I don't know how efficient it is under the new parallel evaluation. I guarantee it's faster than expressions, though.

Do you have any additional thoughts on simulating a proxy mesh (for rigid bodies) and then transferring the motion to the high-res mesh? For example, say you simulate 10 dice rolling on a table (maybe even with all the dice merged into one object to use polygon shells). With the vertex rivet method you would choose 3 vertex IDs for each die, define the up vector (or, in the case of my script, create a pole vector constraint plus a bone), and then constrain the high-res object to each rivet setup. Is this the only possible workflow? It seems very... inelegant.

It's an interesting problem - for starters, I would definitely use polygon shells, as not only is it faster, but it will arguably give a more complete simulation if all your objects are merged - air turbulence (and, I think, fluids) works better this way.

Converting between spaces never really feels elegant, and I'm still looking for better ways. Again, this vertex method is not robust to changes in topology or point numbering (it might be possible to use Transfer Attributes, but I haven't tested it) - so unless you are absolutely sure you will never have any more or fewer dice, I would probably use follicles, either on separate UV sets or on one UV set with every cube laid out together. Ten follicles in your scene will have a very small effect on performance (compared to a physics sim, at least), and it's probably the most future-proof and stable option anyway. Or a simple normalConstraint might work, if you're not bothered about performance at all.
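A UV-pinned follicle rivet, for reference, is just a few connections per die. A sketch for inside Maya - `diceMeshShape` and the UV coordinates are placeholders; you'd make one of these per die and constrain the high-res copy to the follicle transform:

```python
# Sketch only - assumes a mesh shape "diceMeshShape" with sensible UVs.
from maya import cmds

fol = cmds.createNode('follicle', name='diceRivetShape')
fol_xform = cmds.listRelatives(fol, parent=True)[0]

cmds.connectAttr('diceMeshShape.outMesh', fol + '.inputMesh')
cmds.connectAttr('diceMeshShape.worldMatrix[0]', fol + '.inputWorldMatrix')
cmds.setAttr(fol + '.parameterU', 0.5)  # placeholder UV position on the die
cmds.setAttr(fol + '.parameterV', 0.5)

# The follicle computes a world-space frame and drives its own transform.
cmds.connectAttr(fol + '.outTranslate', fol_xform + '.translate')
cmds.connectAttr(fol + '.outRotate', fol_xform + '.rotate')
```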

That's definitely not the only possible workflow either - go explore some others! Random Maya nodes do the craziest stuff.