The problem: storing a dag node component in a way that makes it easily callable and persistent.

As I’ve been refactoring/optimizing our core libraries and updating locinator, I came across this old issue. There are several ways of doing this, some better than others. I’ve just been wrapping up a rewrite of our attribute function library; part of that was moving our msgList concept out of cgmMeta so it can be used outside meta, as well as expanding on it with datList (more on that another day).

Short version

If you don’t care about the details and just wanna see code, grab the latest master branch build of our tools and you can find the main functions here:

cgm.core.lib.attribute_utils.set_message/get_message

Walkthrough example of datList/msgList with new stuff — cgm.core.examples.help_datList_msgList.py

Note — There may be a lot of script editor activity on the example stuff, as I currently have DEBUG on in the module.

Long version

Let’s say we wanna store an object ‘null1’ to call later, and we’re storing it on ‘storageNull’. How might we do that?

string attr – example: storageNull.stringAttr = null1

This works as long as there is only one object named ‘null1’ and as long as ‘null1’ is never renamed. So in short, it works rather poorly.

msgAttr – example: storageNull.msgAttr >>connection>> null1.msg

This works great and was my preferred method up to this point.

The conundrum on locinator was that I had some locator types that were created from a component, say ‘geo.vtx[123]’. My solution back in 2010ish when I wrote it was to just use a string for the whole thing and hope there wasn’t a name conflict.

So, how might we store this in a persistent manner? Having learned a few things since back in twenty ought ten, I said, self, we can do better than that now.

The new implementation is as follows:

We take our data to be stored and split the base node out from any component or attribute. Namely, we split on the first ‘.’ and validate the bits to know what we have

Store the main node as a standard message connection

Store the extra bits to a json dict via Red9’s json string implementation. We also allow for a specified dataAttr (our extra data attr) and dataKey (for the dict) for specific storage

So in this case our ‘geo.vtx[123]’ is split to the following:

storageNull.msgAttr >>connection>> geo.msg

storageNull.dataAttr = {msgAttr/dataKey:vtx[123]}

We do this as a dict and not a simple string attr per stored object because we use lots of these and having two attrs for every stored message seemed overkill. Once I’d worked out the component store, attribute storing was pretty simple. If we wanted to also add ‘geo2.tx’, it would be added as:

storageNull.msgAttr2 >>connection>> geo2.msg

storageNull.dataAttr = {msgAttr/dataKey:vtx[123], msgAttr2/dataKey2:tx}

The dataKey comes in particularly handy with our datList/msgList setup, which is our solution to multi message attrs being rubbish at maintaining ordered data.

When the get_message call happens, it first gets the msgAttr, then checks the default extra data attr if none is specified. Whenever data is found, it gets appended to the return.
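The store/get logic above can be sketched in plain Python, no Maya required. Everything here is an illustrative stand-in, not the actual cgm calls: a dict plays the part of the storage node, and a plain key plays the part of the message connection.

```python
def split_component(arg):
    """Split 'geo.vtx[123]' into ('geo', 'vtx[123]'); plain nodes pass through."""
    if '.' in arg:
        node, extra = arg.split('.', 1)
        return node, extra
    return arg, None

def store_message(storage, msg_attr, arg, data_attr='dataDict', data_key=None):
    """Store the node part as a 'message connection' and any extra bits
    in a json-style dict, keyed by data_key (defaults to msg_attr)."""
    node, extra = split_component(arg)
    storage[msg_attr] = node  # stands in for storageNull.msgAttr >> node.msg
    if extra is not None:
        storage.setdefault(data_attr, {})[data_key or msg_attr] = extra
    return storage

def get_message(storage, msg_attr, data_attr='dataDict', data_key=None):
    """Rebuild the original arg from the 'connection' plus any stored extra bits."""
    node = storage[msg_attr]
    extra = storage.get(data_attr, {}).get(data_key or msg_attr)
    return '{0}.{1}'.format(node, extra) if extra else node
```

With that, storing ‘geo.vtx[123]’ and asking for it back returns the recombined component even though only the node itself is “connected”.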

Yes, you can do some of this stuff with objectSets or other avenues and sometimes those work great. This
is simply another way of storing data mainly for our rigging purposes.

Been struggling on this one. The problem at hand is getting transformed blendshape targets baked down from one mesh to another. This path happened to be a dead end, but I hope it is useful for other purposes.

There are times when it is useful to see the difference in two meshes, or add/subtract the difference between two. In general, mesh math (as we’ll call it).

There are a few new calls:

cgm.core.lib.geo_Utils

meshMath_values — this call does the math portion of mesh math

meshMath

modes

add : (target + source) * multiplier

subtract : (target - source) * multiplier

multiply : (target * source) * multiplier

average: ((target + source) /2 ) * multiplier

difference: delta

addDiff: target + (delta * multiplier)

subtractDiff: target - (delta * multiplier)

blend: pretty much blendshape result if you added as a target using multiplier as weight
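A rough, Maya-free sketch of what the math portion might look like, with ‘target’ and ‘source’ as per-vertex position lists. The function name, signature, and the direction of the delta are my assumptions here, not the actual geo_Utils code; the mode formulas mirror the list above.

```python
def mesh_math_values(target, source, mode='add', multiplier=1.0):
    """Apply one of the mesh math modes per vertex, per axis."""
    def per_axis(fn):
        return [[fn(t, s) for t, s in zip(tv, sv)]
                for tv, sv in zip(target, source)]
    m = multiplier
    if mode == 'add':
        return per_axis(lambda t, s: (t + s) * m)
    if mode == 'subtract':
        return per_axis(lambda t, s: (t - s) * m)
    if mode == 'multiply':
        return per_axis(lambda t, s: (t * s) * m)
    if mode == 'average':
        return per_axis(lambda t, s: ((t + s) / 2.0) * m)
    if mode == 'difference':
        return per_axis(lambda t, s: s - t)            # the raw delta
    if mode == 'addDiff':
        return per_axis(lambda t, s: t + (s - t) * m)  # target + delta * mult
    if mode == 'subtractDiff':
        return per_axis(lambda t, s: t - (s - t) * m)  # target - delta * mult
    if mode == 'blend':
        # same math as a blendshape target weighted by the multiplier
        return per_axis(lambda t, s: t + (s - t) * m)
    raise ValueError("unknown mode: %s" % mode)
```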

So I got a message from a user on Thursday saying that cgmToolbox didn’t work in Maya 2017. Got around to installing 2017 and yup – borked. Spent the evening on Thursday identifying this issue and Friday was fix day.

If you don’t care about what was wrong and just want the bottom line — cgmToolbox should be working in 2017 Maya with the new build I’ll be pushing to the repository shortly.

If you do care…

NOTE – If you use zooToolbox, and specifically zooPy.path.Path (or zoo.Path as I’ll call it), it would behoove you to look at this post unless you like stumbling down the same rambling trail others have trod.

Been using zoo stuff for well over 5 years now, and Hamish (creator of zooTools) is out of the game last I knew, so I decided I had best fix the problem myself, as googling the topic got jack squat and my usual sounding boards hadn’t come across it yet.

Initially I thought Autodesk had gone and changed something and blown up my stuff, but at the end of the day it turned out that the Python 2017 runs updated the built-in str class. It just so happens that zoo.Path subclasses str and overloads some of its built-in calls (find and replace specifically). There is a walk generator for path stuff that was getting an instance of zoo.Path pushed into it rather than a ‘native’ string. Part of that (new to 2017) walker calls ‘replace’ and so breaks, because it needs to replace the path separator, which zoo.Path’s overload specifically avoids — zoo.Path’s replace is ONLY for replacing tokens between the separators.

Long story short, that raises an error that ‘/’ cannot be indexed, because the find call (in zoo.Path) specifically strips the separator in its searching.

Interesting tidbits:

With 2017, os.path.sep is now ‘\\’; up till 2017 it had been ‘\’ at least all the way back to Maya 2011. On Windows, at least.

Something changed with the os.walk generator so it doesn’t work as it did before 2017. Maybe it used to str(arg) stuff in the process and now just passes the string through. Whatever the reason, it broke.

zooPy.path.Path — If you have old versions of zoo installed and are trying to run stuff in 2017, it’s gonna break on you if it hasn’t already. You can use this or do your own patch :)

osPath — call to return an os.path.sep joined version of the path. Path natively works with ‘/’, and the new double separator messes with stuff

_list_filesystem_items — changed the walk creator to use an osPath string to stop the failing

Cleaned out a bunch of stuff from __init__ files — I’d had some built-in calls for listing files and getting other info from before I knew the right way to do it, or at least a better one.

cgmToolbox

clean_scriptPaths/clean_pluginPaths — The calls that were breaking stuff were my path setup calls, so the env for these guys got a little borked during the troubleshooting. This was a quick attempt at fixing that. As an experiment, it may or may not be reworked.

Check all paths for valid paths (will add to the env without failing)

Removed a bunch of .git stuff that some other scripts I’d used from someone else apparently added.

Acts as a report of what’s there, if you didn’t know, as it reports all the good paths

core.cgmPy.os_Utils

get_lsFromPath — reworked from the __init__ cleanup. Accepts file paths now and just gets the directory they’re in for searching

Now I can get back to cgmBlendshape for Morphy 2. Wrote some fun mesh math stuff toward that end earlier in the week as well but that’s a post for another day…:)

We’re pleased to announce our first on demand class with Rigging Dojo – Intro to Metadata. This is our first class of this type in general and we hope folks find it helpful. Click on the pic above or here….

This class was created with two purposes in mind:

To share some of the many lessons learned over the past several years working with red9’s great code base

To provide a basic foundation of knowledge for those wanting to delve into Morpheus Rig 2’s continued development.

Some might wonder why you might want to use red9’s code base, or what benefits in particular you might find. The easiest way to answer is to provide a code example of a typical rigging task, with and without meta. Let’s look at something one does pretty regularly while rigging – doing some stuff on a given joint chain.

Note — this exercise was painful to write as I’d forgotten most of the standard calls and ways to do stuff as so much is just built in now…

First, open up maya and make an amazing joint chain. If it’s not amazing, that’s okay – start over and do it again.

To anyone who’s worked with coding blendshape stuff, it can be tedious, especially when you bring in inbetweens. Thankfully, Autodesk is fixing a lot of that with 2016 Extension 2, if you missed that update, but there are still folks using older versions and it doesn’t resolve everything. We have to deal with blendshapes a good bit on Morpheus 2, and so we wrote a metaclass for them.

Initial features of the cgmBlendshape metaclass that you can’t easily do with normal api or mc/cmd calls:

Most functions work off of index/weight or shape/selection format

Easy alias naming

Replacing shapes — change out shapes in place keeping inbetweens and connections intact

inputPointsTarget — this is the differential data of the point positions being transformed by a given shape target. It is indexed to the inputComponentsTarget array

inputComponentsTarget — these are the components that are being affected by a given shape

inputGeomTarget — this is the geo affecting a particular target shape

Replacing blendshapes – you can 1) use a copy-geo function, if the point count is exact, to change the shape to what you want, or 2) make a function to do it yourself. There’s no great way to replace a shape except to rebuild that whole index or the node itself. We made a function to do that.

Once a blendshape node is created with targets, the individual targets are no longer needed and just take up space. Especially when you have the easy ability to extract shapes.

Getting a base for calculating delta information. As the blendshapes are stored as deltas off of the base, the best way I could find to get that delta was to turn off all the deformers on the base object, query that, and then turn the envelopes back on and reconnect them. I’m sure there are more elegant solutions, but I was unsuccessful in finding one.

Once you have that, creating a new mesh from an existing one is as simple as:

Taking base data

For components that are affected on a given index/weight: add the delta to base

Duplicating the base and calling xform(t=vPos, absolute=True) on each of the verts will give you a duplicate shape
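Those three steps can be sketched in plain Python (names here are illustrative, not the actual cgmBlendshape calls): take the base positions, add the scaled deltas for the affected components, and the result is the vert positions you’d xform onto a duplicate of the base.

```python
def rebuild_target_positions(base_positions, deltas, weight=1.0):
    """base_positions: list of [x, y, z] per vert;
    deltas: {vertIndex: [dx, dy, dz]} for affected components only."""
    # 1) take the base data (copy so we don't mutate the source)
    result = [list(p) for p in base_positions]
    # 2) for affected components, add the delta (scaled by weight) to the base
    for idx, delta in deltas.items():
        result[idx] = [p + d * weight for p, d in zip(result[idx], delta)]
    # 3) these are the positions to xform onto a duplicate of the base
    return result
```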

Here’s some code to play with the first iteration. You’ll need to grab the MorpheusDev branch on bitbucket if you wanna play with it till I push it to the main branch.

"""
------------------------------------------
cgm.core.examples
Author: Josh Burton
email: jjburton@gmail.com
Website : http://www.cgmonks.com
------------------------------------------
Help for learning the basis of cgmMeta.cgmBlendshape
================================================================
"""
import cgm.core
from cgm.core import cgm_Meta as cgmMeta
cgm.core._reload()#...this is the core reloader
#==============================================================================================
#>> cgmMeta.cgmBlendshape
#==============================================================================================
import maya.cmds as mc
#You MUST have the demo file to work through this exercise, though you could probably glean the gist without it with your own setup
#>>Starting off =========================================================================
bs1 = cgmMeta.cgmBlendShape('pSphere1_bsNode')#...let's initialize our blendshape
bs1._MFN #...here you'll find the api blendshape deformer call should you be inclined to use it
#>>bsShape Functions =========================================================================
#We're referring to the shapes that drive a blendshape node's base object here, and the functions relating to them
#Doing this first will make the blendshape wide functions make more sense on the queries and what not.
bs1.bsShape_add('base1_add')#...we're gonna add a new shape to our node. Since no index is specified, it just chooses the next available
bs1.bsShape_add('base1_add', 8)#...let's specify an index
#...hmm, our add throws an error because that name is taken. let's fix it
bs1.bsShape_nameWeightAlias('HeyThere',8)#...nice!
bs1.bsShape_add('base1_tween', 0, weight = .5)#...we're gonna add a new inbetween shape by its geo, index, and weight
#==============================================================================================
#Replace functions...
#...replacing is not something easily done in basic maya calls
bs1.bsShape_replace('base1_replace','base1_target')#...replace with a "from, to" call.
bs1.bsShape_replace('base1_target','base1_replace')#...and back
#...Note - the inbetween is intact as is the driver connection
bs1.bsShape_replace('base1_replace',0)#...indice calls also work for most calls
bs1.bsShape_replace('base1_target',0)
#==============================================================================================
#Indexing...
#An index for use with working with blendshapes needs to have an index and weight in order to know what you're working with
bs1.bsShape_index('base1_target')#...this will return a list of the indices and weights which this target affects in [[index,weight],...] format
bs1.bsShape_index('base1_add')#...this will return a list of the indices and weights which this target affects in [[index,weight],...] format
#==============================================================================================
#Query...
bs1.bsShape_getTargetArgs('base1_target')#...this returns data for a target in the format expected by mc.blendshape for easier use in nested list format
bs1.is_bsShape('base1_target')#...yup
bs1.is_bsShape('bakeTo')#...nope
#==============================================================================================
#>>Blendshape node wide functions =========================================================================
bs1.get_targetWeightsDict()#...this is a handy call for just getting the data on a blendshape in {index:{weight:{data}}} format
bs1.get_indices()#...get the indices in use on the blendshape from the api in a list format
bs1.bsShapes_get()#...get our blendshape shapes that drive our blendshape
bs1.get_baseObjects()#...get the base shapes of the blendshape or the object(s) the blendshape is driving
bs1.get_weight_attrs()#...get the attributes on the bsNode which drive our indices
bs1.bsShapes_get()#...get our shapes
#==============================================================================================
#>>Arg validation =========================================================================
bs1.bsShape_validateShapeArg()#...no target specified, error
bs1.bsShape_validateShapeArg(0)#...more than one entry, error
bs1.bsShape_validateShapeArg(0, .5)#...there we go
bs1.bsShape_validateShapeArg('base1_target')
#==============================================================================================
#Generating geo...
#Sometimes you wanna extract shapes from a blendShape node. Let's try some of that
bs1.bsShape_createGeoFromIndex(0)#...will create a new piece of geo matching the shape at weight 1.0
bs1.bsShape_createGeoFromIndex(0,.5)#...will get you the inbetween
bs1.bsShape_createGeoFromIndex(3)#...will get you squat because nothing is there
bs1.bsShape_createGeoFromIndex(0, multiplier = 2.0)#...you can also generate factored targets
bs1.bsShape_createGeoFromIndex(0, multiplier = .5)#...
bs1.bsShapes_delete()#...delete all the targets for your blendshape.
#...ah geeze I didn't mean to do that. No worries!
bs1.bsShapes_restore()#...rebuilds the targets and plugs them back in
#==============================================================================================

So the rabbit trail from over the weekend proved not to be the answer to my original problem as hoped. Namely, I was still getting geo movement from far-off regions when baking blendshapes to non-similar geo (think a sphere placed on a body).

As such, my next plan to resolve this was to create a specific conforming geo piece to wrap to, then wrap my nonconforming object to that. To do this, I needed a way to find the geo closest to the geo I wanted to transfer the blendshapes to, and so wrote a new function that would:

Search target geo against a source geo piece to find geo from each target within the source by two methods:

boundingBox.contains – by vert

rayCasting – by the compass vectors to make sure it is completely within the sourceObject

Translate that data to verts, edges, faces

Have a method to expand that data:

selection traversing

softSelection radius
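The boundingBox.contains pass is the simple part and can be sketched without Maya; in practice you’d feed it the result of mc.exactWorldBoundingBox on the source and the world-space vert positions of each target. The helper names below are mine, not the actual get_contained internals.

```python
def bbox_contains(bbox_min, bbox_max, point):
    """True if point is inside the axis-aligned bounding box (inclusive)."""
    return all(lo <= p <= hi
               for lo, p, hi in zip(bbox_min, point, bbox_max))

def verts_in_bbox(bbox_min, bbox_max, positions):
    """Return the indices of the positions that fall inside the box."""
    return [i for i, p in enumerate(positions)
            if bbox_contains(bbox_min, bbox_max, p)]
```

This is why the bbox mode is so much faster: it’s a handful of comparisons per vert, versus firing rays for the cast check.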

Lessons learned:

Bounding box checking is much, much faster, so use that mode unless you just have to have a more precise idea of which verts are inside the mesh.

There wasn’t a lot of specific info I could find on some of these concepts, so I wanted to share to save someone else time and dead ends.

Here’s part one of this issue, which is housed at cgm.core.lib.geo_Utils.get_contained. There are too many dependencies to include them all, but you can get the gist from the code.

On yet another rabbit trail of problem solving on Morpheus 2.0, I came across an issue where wrap deformers weren’t working as needed. Namely, transferring blendshapes from one mesh to another when the shape of the target mesh wasn’t like the original. Even geo changes in regions nowhere near the ‘to bake’ geo were affecting it.

So I did some googling and came across a concept I’d not used before – namely, using a mesh to deform another with a skinCluster.

Neat, so how do we do it?

Get your target and source mesh ready

Create a joint and skinCluster your target mesh to it

Add the driving mesh to the skinCluster with the useGeometry flag ( sample code for this line below).

polySmoothness flag. This controls the smoothness of the mesh deformation of the target mesh.

A polySmoothness of 0 is the closest to a standard wrap deformer

In my initial testing I found that this flag could only be set consistently on creation. Editing the flag could push the smoothness up but not down (in 2011 at least).

Make sure the useComponents attribute on the skinCluster is set to True. If you don’t see the deformation doing anything this is the likely culprit.

I wrote a script to set this up, but it’s still WIP. It’s a function found here: cgm.lib.deformers.influenceWrapObject. Because of the issue noted in step 3.2, I added polySmoothness as a creation flag.
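As a stand-in for the sample code mentioned above, here’s a hedged sketch of the steps. The import is guarded so the logic reads outside Maya, the function name is hypothetical (not the actual influenceWrapObject), and the flag usage reflects my reading of the mc.skinCluster docs, so verify against your Maya version.

```python
try:
    import maya.cmds as mc
except ImportError:
    mc = None  # not running under Maya; the function below is illustrative

def influence_wrap(target_mesh, driver_mesh, poly_smoothness=0):
    """Drive target_mesh with driver_mesh via a skinCluster 'wrap'."""
    # 1) a joint to hold the initial skinCluster
    joint = mc.createNode('joint', name='wrap_base_jnt')
    # 2) skinCluster the target mesh to it
    skin = mc.skinCluster(joint, target_mesh, toSelectedBones=True)[0]
    # 3) add the driver mesh as a geometry influence; polySmoothness is most
    #    reliable when set at add time (see step 3.2 above)
    mc.skinCluster(skin, edit=True, addInfluence=driver_mesh,
                   useGeometry=True, polySmoothness=poly_smoothness)
    # 4) without useComponents the geometry influence won't deform anything
    mc.setAttr('%s.useComponents' % skin, True)
    return skin
```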

This method of wrapping is much more localized than wrap deformers when the mesh isn’t close AND provides an easy way to paint weights for the deformation.

Released a build of Morpheus 2 this week and immediately ran into some issues with the marking menu and hot keys. I’d been using zooToolbox’s setup for hot keys for years, but it didn’t work with 2016, so I dug in.

Maya 2016 has a pretty neat new editor, but it’s still probably more steps than most of our users could reliably follow, so I wanted to get the button-push setup back.

There are a few things to remember when working with hot keys, and in this order…

runTimeCommand — This is the code that gets run. It can be python or mel

nameCommand — This is required for a hot key to be setup properly

hotkeySet — This is new with 2016; you need to be on a non-default set to be able to add a new hot key, because the default set is unchangeable

savePrefs — after setting up your hotkey, you must save the prefs or the newly created hotkeys go away (not sure if this is new to 2016 or not)
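Putting those four steps together might look something like this. The import is guarded so the sketch reads outside Maya, the function and set names are hypothetical, and the flags reflect my understanding of the 2016 commands, so double-check against your version.

```python
try:
    import maya.cmds as mc
except ImportError:
    mc = None  # not running under Maya; the function below is illustrative

def setup_hotkey(name, command, key, ctrl=False, alt=False, shift=False):
    """Wire up a python hotkey in the runTimeCommand > nameCommand >
    hotkeySet > hotkey > savePrefs order described above."""
    # 1) runTimeCommand - the code that actually gets run
    mc.runTimeCommand(name, annotation=name, command=command,
                      commandLanguage='python')
    # 2) nameCommand - required for the hotkey to hook up properly
    nc = mc.nameCommand('%sNameCommand' % name, annotation=name, command=name)
    # 3) hotkeySet (2016+) - the default set can't be changed, so make sure
    #    an editable set is current ('cgmHotkeySet' is a made-up name)
    if mc.hotkeySet(query=True, current=True) == 'Maya_Default':
        mc.hotkeySet('cgmHotkeySet', current=True)
    # 4) assign the key, then save prefs so the hotkey survives a restart
    mc.hotkey(keyShortcut=key, name=nc, ctrlModifier=ctrl,
              altModifier=alt, shiftModifier=shift)
    mc.savePrefs(hotkeys=True)
```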

Lessons learned:

hotkeySets — were added in 2016. Any hotkey work you do post-2016 needs to account for them. I ended up having my stuff use the existing set if it wasn’t the default, and create a new one if the default is the current one

hotkey -shiftModifier flag — this was added in 2016

Pushing dicts into mc/cmds calls — In general, something like mc.command(**_d) works, with _d being your dict. However, on mc.hotkey I found that the keyShortcut value needed to come at the start of the call to work: mc.hotkey(_k, **_d).

I ended up writing a handler and gui to set stuff up. I’ll swing back and talk about it another time if there’s interest.

As I’ve been closing in on finishing Morpheus 2, I found myself in need of a distributable skin data system to be able to apply skinning information to Morphy meshes after they’d been customized and no longer matched up with the base mesh. Not being able to find a good way of doing it natively in Maya, and not finding any open source options, writing our own was the only way forward.

Thanks to Alex Widener and Chad Vernon for some tech help along the way.

Before delving in here, here’s some lessons learned along the way.

mc.setAttr — found this to be an unreliable method of setting weights via the ‘head_geo_skinNode.weightList[0].weights[0]’ call convention. It just didn’t seem to set properly via any call but the api.

mc.skinPercent — this call is hopelessly slow and should never be used for intensive work. A query loop went from 78 seconds to 1.3 simply by using an api weights data call, even with having to re-parse the weights data to a usable format.

weights — speaking of, this was an obtuse concept to me. This is in regards to the doubleArray list used with an MFnSkinCluster. In short, the easiest way to get to a spot in this data set is as follows:

weights.set(value, vertIdx*numInfluences+jointIdx)

weights — doubleArray list instance

value — the given value you want to set

vertex index * the number of influences + the joint index = the index in the array
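As a tiny pure-python stand-in for that indexing math (the api doubleArray just becomes a flat list here; the helper names are mine):

```python
def weight_index(vert_idx, joint_idx, num_influences):
    """Index into the flat skin weights array laid out as
    [vert0_joint0, vert0_joint1, ..., vert1_joint0, ...]."""
    return vert_idx * num_influences + joint_idx

def set_weight(weights, vert_idx, joint_idx, num_influences, value):
    weights[weight_index(vert_idx, joint_idx, num_influences)] = value

def get_weight(weights, vert_idx, joint_idx, num_influences):
    return weights[weight_index(vert_idx, joint_idx, num_influences)]
```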

Normalizing skin data — You usually want your skin values to add up to 1.0, so here’s a chunk to help

L = [.2, .5]#...list of values
normalizeTo = 1.0#...value to normalize the sum to
[normalizeTo * (float(i)/sum(L)) for i in L]
#...thanks to http://stackoverflow.com/questions/26785354/normalizing-a-list-of-numbers-in-python

The initial list of requirements for the functions was as follows:

Readable data format — decided on configobj having used it with some red9 stuff and finding it easy to use.

Export/import data sets

Work completely from the data file for reference (no source skin necessary)

Work with different vertex counts if similar shape

Use indexed data sets for easy remapping of influences

With that being said, here’s the demo file link; you’ll need the latest cgm package to follow along. Open up a python tab in the script editor and try these things one line at a time.

import cgm.core.lib.skinDat as SKIN
reload(SKIN)
import maya.cmds as mc
SKIN.data()#...nothin
#select pSphere1 then try again
d1 = SKIN.data()#...now we have a validated source and throw it to a variable
d1.report()#...this will show some of what we have stored. No config file yet so there's not a ton there.
#...let's write our skin data to a file
d1.write()#...this will 1) gather the skinning data and 2) write it to a file where you specify
d1.report()#...bit more info now...
"""
As you can see if you peruse we're storing a lot of extra data.
The idea here is to store enough that we can do some much neater
stuff down the line.
"""
d1.validateTargetMesh('pSphere2')#...let's add a target mesh to our data object
"""
Before we apply our data let's talk about a couple of modes we have available
:target - Uses existing target objects skin cluster influences
:source - Uses source mesh's skin cluster influences
:config - Uses config files joint names
:list - Uses a list of joints (must match config data set len and be indexed how you want it mapped)
In this case, we have no skinCluster on our object yet so source is
"""
d1.applySkin(influenceMode = 'source')#...no go as we don't have a source yet
reload(SKIN)
#...what about for different vert counts
d1.validateTargetMesh('pSphere_moreverts')#...let's add a new target mesh
d1.applySkin(influenceMode = 'source')#...no go as we don't have a source yet
"""
Say we wanna map our data to new joints...
"""
mc.delete('pSphere2_skinCluster')#...cause I don't have influence changing for existing clusters working yet
newJointList = [u'joint4', u'joint4|joint2', u'joint4|joint2|joint3']
d1.validateTargetMesh('pSphere2')#...let's add a target mesh to our data object
d1.applySkin(influenceMode = 'list')#...oops
d1.applySkin(influenceMode = 'list', jointList = newJointList)#...there we go
#...other calls
d1.read()#...read a file
d1.updateSourceSkinData()#...this is a call to update the source data with new weighting should you change it
SKIN.gather_skinning_dict('pSphere1')#...the data gatherer