blender as an exchange format?

July 1st, 2011 by hamish

Proceduralism in a pipeline can be super useful. Of course, it depends on the project. It's most useful on projects with a small number of expensive, multi-faceted assets. High-fidelity characters, for example, with powerful facial rigs, all-encompassing body deformation rigs, clothing and body-part simulation, etc. In almost every case, many different hands will touch such an asset, and being able to express the operations to perform on the different streams of data is invaluable.

Let's take a clothed hero character as an example. Breaking the implementation of this character asset down into areas of expertise, we'll most likely have a character modeler, a facial modeler/rigger, a clothing sim guy, and then the body articulation guy, who may also be the guy who builds the puppet rig on top of the deformation rig.

Now obviously this could all be handled by one guy, but on most big productions it isn't. After all, being a fantastic facial modeler/rigger is hard. Really hard. It takes many years of work, practice and study to do really well. The same goes for simulation, modeling, deformation rigging and puppet rigging. There are very few people in the world who are fantastic at all of these. And even if you did give all the tasks to one guy, it takes a long time to do all this work, and doing the tasks serially is usually unacceptable. So you want to be able to split the work up, assign different people to get the job done and charge forth.

Making it possible to have many people work on a single asset, however, is kinda tricky. How do you funnel all that data together? Ideally you want to be able to build some sort of recipe that takes a bunch of different pieces and splices them together. But this splicing process needs to be easy to use and, ideally, transparent.

As an example, let's take body articulation. Let's build a proxy model, as close in proportions as possible to what we think the final model will be. Ideally this proxy model will be some early version of the model that everyone is happy with. The articulation guy can then set up skinning and deformation rigging on this proxy geometry. Once we start getting revisions of the final geometry, we can transfer the skeleton and weights from the proxy onto the final geometry and spit out a rigged version of the actual geometry (see the sketch below). This way the modeler and the rigger can iterate in parallel without stepping on each other's toes.
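In Maya terms, that transfer step might look something like this. It's a minimal sketch, not how the original tool actually did it, and the mesh names are placeholders:

```python
import maya.cmds as cmds

def transfer_skinning(proxy_mesh, final_mesh):
    """Rebind final_mesh to the proxy's skeleton and copy the weights over."""
    # find the skinCluster driving the proxy geometry
    src_skin = cmds.ls(cmds.listHistory(proxy_mesh), type='skinCluster')[0]
    influences = cmds.skinCluster(src_skin, query=True, influence=True)

    # bind the final geometry to the same joints
    dst_skin = cmds.skinCluster(influences, final_mesh, toSelectedBones=True)[0]

    # copy the weights across using a closest-point association
    cmds.copySkinWeights(sourceSkin=src_skin, destinationSkin=dst_skin,
                         surfaceAssociation='closestPoint',
                         influenceAssociation='oneToOne',
                         noMirror=True)
```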

Let's take the face as another example. In a similar way, we can have the facial guy start building the facial rig on proxy geometry. As newer revisions of the geometry come in, the facial rig can be transferred to the new geometry, tweaked as necessary, and then spliced together with the rest of the body geometry. This finalized geometry can then be combined with the articulation pass we talked about above and sent down the pipe for puppet rigging, simulation, or whatever other requirements you have.

The transfer of data between the proxy geometry and the finalized geometry can be done in a variety of ways, and which one you choose depends on the specifics of what you're after. Using UV space to transfer data is probably one of the most reliable methods, but you can also use closest point.
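For instance, Maya's transferAttributes command exposes both association modes. This is just an illustration of the two options with placeholder mesh names, not part of the original tool:

```python
import maya.cmds as cmds

# UV-space association: reliable when the proxy and final share a UV layout
cmds.transferAttributes('proxyMesh', 'finalMesh',
                        transferNormals=1,
                        sampleSpace=3)   # 3 = UV space

# closest-point association: no shared UVs needed, but it can grab the
# wrong surface where geometry overlaps (lips, fingers, etc.)
cmds.transferAttributes('proxyMesh', 'finalMesh',
                        transferNormals=1,
                        sampleSpace=0)   # 0 = world space, i.e. closest point
```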

So how does this recipe get run? Well, there are a variety of ways of doing it, but here's what I did. I wrote a file format to describe the recipe. It was basically a format that recorded a bunch of operations, such as LoadGeometry, MergeGeometry, CopySkinningAndSkeletonFromGeometry, ReplaceMaterial, etc. These operations required various arguments, which were also recorded. There was an editor which allowed you to set up these operations, which were then saved to disk.

Then in Maya I wrote a file translator plugin which would run these operations and import the resulting data. This allowed us to reference these recipes into Maya. So, for example, say an artist wants to see the latest version of a character's skinned geometry and facial rig. They would simply reference in the appropriate recipe, and the file translator would run the latest operations on the latest data and dump the result into the Maya scene. Each time the scene was loaded, the latest data was being operated on, so artists were always seeing the latest data without having to remember to publish their files or anything like that. The process was transparent.
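To make that concrete, here's a toy version of the idea. The real format was proprietary, so the JSON layout, operation names and argument shapes here are all invented for illustration:

```python
import json
import maya.cmds as cmds

# each recorded operation name maps to a function taking its recorded arguments
OPS = {
    'LoadGeometry':    lambda path: cmds.file(path, i=True, returnNewNodes=True),
    'MergeGeometry':   lambda meshes: cmds.polyUnite(meshes, constructionHistory=False),
    'ReplaceMaterial': lambda mesh, shadingGroup: cmds.sets(mesh, edit=True,
                                                            forceElement=shadingGroup),
}

def run_recipe(recipe_path):
    # e.g. [{"op": "LoadGeometry", "args": {"path": "body_v12.ma"}}, ...]
    with open(recipe_path) as f:
        recipe = json.load(f)
    for step in recipe:
        OPS[step['op']](**step['args'])
```

Wrapping something like run_recipe in a file translator is what lets Maya's referencing treat a recipe as if it were an ordinary scene file.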

Obviously people still need to communicate closely. The situation is akin to multi-processing: some problems can easily be spread across multiple processors while others can't, and even problems that can be split up don't always get faster when you throw more processors at them. Communication between people can be a bottleneck. There is definitely a sweet spot; it just depends on the specifics of the asset being produced.

The system I worked on used a proprietary data format, which worked out okay, but we ended up implementing a lot of basic functionality ourselves: wrap deformation, soft selection, skinning, merging of vertices, etc. It would have been a lot more powerful had I been able to use Blender for the data manipulation, because it already has all of this functionality.
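For a sense of what that buys you, here is roughly what vertex merging and a wrap-style deform look like through Blender's Python API. This is a sketch written against a current bpy (the API has changed since 2011), and the object names are placeholders:

```python
import bpy

obj = bpy.data.objects['finalMesh']
bpy.context.view_layer.objects.active = obj

# merge coincident vertices -- functionality we had to write from scratch
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.remove_doubles(threshold=0.001)
bpy.ops.object.mode_set(mode='OBJECT')

# a wrap-style deformation via the built-in Shrinkwrap modifier
mod = obj.modifiers.new(name='wrap', type='SHRINKWRAP')
mod.target = bpy.data.objects['proxyMesh']
```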

Anyway... maybe once Blender is available as a standalone Python module, we can get all the major 3D apps to implement Blender exporters, and a general system like this can be implemented. Blender as a 3D exchange format would be awesome. WAY more awesome than Collada.
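The dream, roughly, would be any plain Python process pulling in Blender's data model and operators as a library and acting as the hub between apps. Everything below is hypothetical usage sketching that idea, not a workflow that exists today:

```python
# hypothetical: bpy importable from any ordinary Python interpreter
import bpy

bpy.ops.wm.open_mainfile(filepath='character.blend')   # data exported from one app
# ...splice, merge and transfer weights with Blender's own tools...
bpy.ops.wm.save_as_mainfile(filepath='character_rigged.blend')
```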

This post is public domain

  • http://www.facebook.com/tom.kleinenberg Tom Kleinenberg

I really dig this idea, even for roles that aren't particularly technical but need something a little more advanced than .obj or Collada. A friend at work is a heavy Blender evangelist, so a lot of our stuff gets routed that way anyway; it would just be nice if the transitions were a bit more streamlined.

  • http://twitter.com/mikebelanger mikebelanger

Hmm, the process you describe sounds a lot like a 'weak referencing' technique I heard about some time ago. Considering Blender has issues with its own referencing (aka the link system is weird), your proposed method sounds promising for Blender and mixed pipelines alike.