::skinned geo in modo::

February 27th, 2010 by hamish

I’m not really a modo user – I’ve had to write a few scripts for it, but generally I’m considered a modo n00b.  But it’s quite a popular modeling app at work, which is hardly surprising given how much better it looks/sounds than maya for modeling.  So anyway, it occurred to me that I could use a modified version of the skin weight saving tool I wrote for maya as a way of creating skinning information for geometry out of modo.

How?

Well, we have a compilation step between export and visualization in the game engine.  This isn’t strictly necessary, but it provides a convenient way of doing rigorous error checking and expensive optimization up front – and of late we’ve been using it as a place to do additional assembly on the exported data.

When data gets exported out of the authoring app we try to preserve as much of the scene as possible – which is useful for a variety of reasons I won’t go into here.  But for runtime efficiency you generally want to do the opposite – rigorously remove anything that isn’t essential.  Some studios do this in their exporter, some do it at load time in the engine – we do it as a post-export step before loading into the engine, which, as I mentioned, provides a convenient place to do additional assembly on data that comes from multiple sources.

Getting back to the original point of the post – skinning geometry in modo.  Well, technically the skinning happens outside of modo, but all the data is authored inside modo, so it IS only a technicality.

First up – the skinning tool in maya is a point cloud loading and saving tool.  Basically, you give it some skinned geometry and the tool writes out position vectors with a list of joint names and joint weights – and that’s it.  So if we can get that information authored in modo, then we can derive skinning data.
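
To make that concrete, here’s a rough sketch of the sort of data the tool traffics in – a list of samples, each one a position plus the joints and weights at that position.  The format and function names below are purely illustrative, not the actual zooToolBox code:

    def saveWeightCloud(filepath, samples):
        # samples is a list of (position, jointWeights) pairs - position is an
        # (x, y, z) tuple, jointWeights is a list of (jointName, weight) pairs
        with open(filepath, 'w') as f:
            for (x, y, z), jointWeights in samples:
                weightStr = ' '.join('%s=%f' % (joint, weight) for joint, weight in jointWeights)
                f.write('%f %f %f %s\n' % (x, y, z, weightStr))

    def loadWeightCloud(filepath):
        samples = []
        with open(filepath) as f:
            for line in f:
                toks = line.split()
                pos = tuple(float(t) for t in toks[:3])
                jointWeights = [(tok.split('=')[0], float(tok.split('=')[1])) for tok in toks[3:]]
                samples.append((pos, jointWeights))

        return samples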

In modo, what the artist authors is a hierarchy of transforms (I’m not sure if that’s what they’re called in modo – modo isn’t that strong on clear terminology from what I’ve seen of it), with a bunch of primitive geometry parented to those transforms.  The empty transforms get named in a special way (like bone_pelvis or joint_arm_L etc) so the tool that derives the skinning data knows what to interpret as the skeleton, and any geometry parented under one of those transforms is assumed to be rigidly skinned to that joint.
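
Deriving a rigid point cloud from that setup is pretty mechanical – walk the hierarchy, treat anything matching the naming convention as a joint, and emit every vert of the geometry parented under it with a weight of 1.0 to that joint.  Something along these lines, where the node objects are stand-ins for however the exported hierarchy actually gets represented:

    JOINT_PREFIXES = ('bone_', 'joint_')

    def isJoint(node):
        # anything named to the convention is treated as part of the skeleton
        return node.name.startswith(JOINT_PREFIXES)

    def buildPointCloud(root):
        samples = []
        for node in root.iterDescendants():
            if not isJoint(node):
                continue

            # geometry parented under a joint transform is rigidly skinned to it,
            # so every one of its verts gets a single weight of 1.0 to that joint
            for child in node.children:
                for pos in getattr(child, 'worldVerts', []):
                    samples.append((pos, [(node.name, 1.0)]))

        return samples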

So this is enough data to generate a point cloud that can be used to derive skinning information.  The skin weight tool then looks through the point cloud data and does a radial search, based on a couple of user settings, for each vertex in the actual geometry to be skinned.  Generally what the search does is expand a starting radius until it’s found 2 or more verts.  Then it finds the closest verts* (see below for details on what this means), averages their weighting contribution and applies the skinning data.

So it’s kinda like sculpting volumes – and in fact, it’s a really fast and easy way of generating skinning information.

* To find the “closest verts” the tool works kinda like this: given a vert on the actual geometry, it starts off with a radius of x.  The radius is grown each iteration until multiple verts are found.  From these verts, the closest is found.  The distance to this closest vert is stored, and the distances to all the other verts are then compared to a ratio.  Any vert that falls within the <closest distance>*<ratio> range is included in the weight sum (weighted by proximity).  Doing it this way means scale isn’t important, and the user is presented with two fairly intuitive values to control the weighting.
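
In code, that search might look something like the sketch below – the parameter names are made up, and a simple inverse-distance falloff stands in for whatever proximity weighting the actual tool uses:

    import math

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def weightsFromCloud(vertPos, cloud, startRadius=0.1, growFactor=2.0, ratio=2.0):
        # cloud is a list of (position, jointWeights) samples, as built above.
        # grow the search radius until at least two samples fall inside it
        # (assumes the cloud itself contains at least two samples)
        withinRadius = []
        radius = startRadius
        while len(withinRadius) < 2:
            withinRadius = [(dist(vertPos, pos), jointWeights)
                            for pos, jointWeights in cloud
                            if dist(vertPos, pos) <= radius]
            radius *= growFactor

        # the closest sample defines the cutoff - anything within
        # closestDist * ratio contributes to the final weighting
        withinRadius.sort(key=lambda sample: sample[0])
        closestDist = withinRadius[0][0]

        summed = {}
        total = 0.0
        for d, jointWeights in withinRadius:
            if d > closestDist * ratio:
                break

            # closer samples contribute more strongly (inverse distance falloff)
            contribution = 1.0 / max(d, 1e-6)
            total += contribution
            for joint, weight in jointWeights:
                summed[joint] = summed.get(joint, 0.0) + weight * contribution

        # normalise so the final weights sum to 1
        return dict((joint, weight / total) for joint, weight in summed.items())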


This post is public domain


  • http://www.watchmike.ca Mike

    That last part about finding closest verts is super-cool. I could see a process like that working for constraining a master armature to a slave armature – i.e. figuring out the distance between pivot points of bones and making a ratio of them to determine which ones get constrained to which.
    That way, naming bones or tediously constraining them one-by-one is no longer necessary.

    Or maybe I’m completely wrong.

  • hamish

    yeah you *could* do that, although usually if you have two rigs of differing resolution one is derived from the other – so things like naming conventions are usually more reliable – and generally one is derived procedurally from the other anyway, yeah?

    but you’re right, you could use this idea.

  • http://www.watchmike.ca Mike

    Oh – hehe, shows what I know about professional rigging!

    Still – thanks for sharing the weighting info. Weighting is an area that’s relatively quiet in the development sense.

  • hamish

    well, it *could* be useful if you were involved in a part of the pipeline that had no control over the initial rigging. like if you worked at an outsourcing company, perhaps?

    but yeah, ideally you’d want a lot more control over how proxies were rigged up. you only really ever want to rely on proximity data when there are no other alternatives.