::capturing nodes created by function::

July 11th, 2011 by hamish

I’ve written a few tools where being able to capture the nodes created by a particular function has come in handy. So I figured I’d blog about it in the hopes that it makes someone else’s life easier.

So what do I mean exactly? Well, the idea is to run some function and capture all the nodes created while that function executes. I’ll talk a bit about my motivation for this at the end of the post – for now, let’s dig into the code.

Maya provides quite a few event-based callbacks that you can hook up through its API. Before Python came along, this wasn’t easy to do without writing C++ code. You could mostly solve this particular problem without the API access, but not in any way that would be performant.

from maya.OpenMaya import MObjectHandle, MDGMessage, MMessage
import apiExtensions  #this module makes the maya api bindings a little easier to work with

def getNodesCreatedBy( function, *args, **kwargs ):
        '''
        returns a 2-tuple containing all the nodes created by the passed function, and
        the return value of said function
        '''

        #construct the node created callback
        newNodeHandles = []
        def newNodeCB( newNode, data ):
                newNodeHandles.append( MObjectHandle( newNode ) )

        def remNodeCB( remNode, data ):
                remNodeHandle = MObjectHandle( remNode )
                if remNodeHandle in newNodeHandles:
                        newNodeHandles.remove( remNodeHandle )

        newNodeCBMsgId = MDGMessage.addNodeAddedCallback( newNodeCB )
        remNodeCBMsgId = MDGMessage.addNodeRemovedCallback( remNodeCB )

        ret = function( *args, **kwargs )
        MMessage.removeCallback( newNodeCBMsgId )
        MMessage.removeCallback( remNodeCBMsgId )

        #NOTE: the newNodes is a list of MObjects.  If you want node names do something like this:
        #newNodes = [ str( h.object() ) for h in newNodeHandles ]
        #this solution relies on using the apiExtensions module imported above
        newNodes = [ h.object() for h in newNodeHandles ]

        return newNodes, ret

So that’s the code that does the magic. It’s pretty simple, even if a bit awkward to use. Basically you pass it the function you want to run along with its corresponding args, and it will return the list of nodes created by that function and the return value of that function.

Just a quick note – as it says in the comments, if you don’t want MObjects returned you can easily return node names instead. Although if you’re using the apiExtensions module you can use MObject instances as if they were node names. You can grab the apiExtensions module from here, or just grab it out of the latest release. Or if you’re a pymel kinda guy/gal, I expect you can instantiate the PyNode class with an MObject.

Moving on, here is a quick example:

from maya.cmds import *
def someFunc():
        spaceLocator()
        group( em=True )
        joint()
        polySphere()

        return 'some interesting return value'

nodesCreated, returnValue = getNodesCreatedBy( someFunc )
print nodesCreated, returnValue

That’s it. It doesn’t matter how the nodes are created – this function will capture them all. You may also notice that the shape nodes for the locator have been included as well. The nodesCreated list will quite literally contain ALL nodes created by the function passed in.

So what’s it used for?

Say I want to write a tool that will build a dynamics network on a given control chain. I want to be able to turn this functionality on and off easily, and I want to make sure that when it gets deleted there is no cruft left behind. First I write the code to build the network. Then I write another function that will execute this code, capture the created nodes and put them in an object set or a container or something.

Skeleton Builder works in this way. When a rig part is built, the build method gets run inside this node capturing decorator and all created nodes are put into an object set. So Skeleton Builder can easily delete a rig cleanly. It can also more easily understand the relationship between a given node and a given part of the rig.
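
To make the idea a bit more concrete, here’s a rough sketch of what such a node capturing wrapper could look like. This isn’t the actual Skeleton Builder code – the function and set names are made up – but it shows how getNodesCreatedBy and an object set fit together:

from maya import cmds

def buildAndCapture( buildFunc, setName, *args, **kwargs ):
	'''
	runs buildFunc, captures the nodes it creates and stuffs them into an object set
	so the whole setup can be cleanly deleted (or inspected) later.
	NOTE: just a sketch - the real thing would need to handle name clashes, empty
	results etc
	'''
	newNodes, ret = getNodesCreatedBy( buildFunc, *args, **kwargs )

	#convert the MObjects to node names (this relies on apiExtensions - see the note above)
	newNodeNames = [ str( n ) for n in newNodes ]

	#put everything into an object set - cleanly deleting the setup later is then just:
	#cmds.delete( cmds.sets( setName, q=True ) ); cmds.delete( setName )
	cmds.sets( newNodeNames, name=setName )

	return setName, ret

Usage might then look something like buildAndCapture( someFunc, 'dynamics_rig_nodes' ).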

Another tool that uses this is the Dynamics Tool.  This tool will create a hair follicle setup and constrain a given list of objects to it.  It will then give you an interface to delete and create this setup on the fly, or bake the results down to keys.  The deletion of the hair setup is clean because ALL nodes are captured when the rig is created.

One minor thing to note – if you’re running maya 2009 there is a bug with removing node creation callbacks…  This bug is fairly harmless although it may result in noticeable slowdowns if you end up with lots of improperly removed callbacks.

This post is public domain

::blender as an exchange format?::

July 1st, 2011 by hamish

Procedural-ism in a pipeline can be super useful. Of course, it depends on the project. It’s more useful in projects where there are a small number of expensive, multi-faceted assets – high fidelity characters, for example, with powerful facial rigs, all-encompassing body deformation rigs, clothing and body part simulation etc… In almost every case many different hands will touch this single asset, and being able to express the operations to perform on the different streams of data is invaluable.

Let’s take a clothed hero character as an example. Breaking implementation of this character asset down into areas of expertise, we’ll most likely have a character modeler, a facial modeler/rigger, a clothing sim guy and then the body articulation guy, who may also be the guy who builds the puppet rig on top of the deformation rig.

Now obviously this could all be handled by one guy, but on most big productions it isn’t. After all, being a fantastic facial modeler/rigger is hard. Really hard. And it takes many years of work, practice and study to do really well. Similarly for simulation, modeling, deformation rigging and puppet rigging. There are very few people in the world who are fantastic at all of these. And even if you did give the task to one guy, it takes a long time to do all this work. Doing the tasks in a serial fashion is usually unacceptable. So you want to be able to split the work up, assign different people to get the job done and charge forth.

Making it possible to have many people work on a single asset however is kinda tricky. How do you funnel all that data together? Ideally you want to be able to build some sort of recipe to take a bunch of different pieces and splice them together. But this splicing process needs to be easy to use and ideally transparent.

As an example, let’s take body articulation. Let’s build a proxy model – as close in proportions as possible to what we think the final model will be. Ideally this proxy model will be some early version of the model that everyone is happy with. The articulation guy can then set up skinning and deformation rigging on this proxy geometry. Once we start getting revisions of the final geometry we can take this proxy geometry, transfer skeletons and weights onto the final geometry and spit out a rigged version of the actual geometry. This way the modeler and the rigger can iterate in parallel without stepping on each other’s toes.

Let’s take the face as another example. In a similar way we can have the facial guy start doing the facial rig on proxy geometry. As newer revisions of the geometry come in, the facial rig can be transferred to the new geometry, tweaked as necessary and then spliced together with the rest of the body geometry. This finalized geometry can then be combined with the articulation pass we talked about above and sent down the pipe for puppet rigging, or simulation or whatever other requirements you have.

The transfer of data between the proxy geometry and the finalized geometry can be done in a variety of different ways. And which way you choose depends on the specifics of what you’re after. Using UV space to transfer data is probably one of the most reliable methods of transfer, but you can also use closest point.
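
To make that concrete with a Maya example – skin weights can be pushed from the proxy mesh’s skinCluster to the final mesh’s skinCluster either way with copySkinWeights (the skinCluster names here are obviously placeholders, and 'map1' is just the default UV set name):

from maya import cmds

#transfer using UV space association
cmds.copySkinWeights( sourceSkin='proxy_skinCluster', destinationSkin='final_skinCluster',
                      uvSpace=( 'map1', 'map1' ), influenceAssociation='oneToOne', noMirror=True )

#or transfer using closest point on surface instead of UVs
cmds.copySkinWeights( sourceSkin='proxy_skinCluster', destinationSkin='final_skinCluster',
                      surfaceAssociation='closestPoint', influenceAssociation='closestJoint', noMirror=True )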

So how does this recipe get run? Well, there are a variety of ways of doing this, but this is what I did. I wrote a file format to describe the recipe. It was basically a format that recorded a bunch of operations such as LoadGeometry, MergeGeometry, CopySkinningAndSkeletonFromGeometry, ReplaceMaterial etc… These operations required various arguments, which were also recorded. There was an editor which allowed you to set up these operations, which were then saved to disk.

Then in maya I wrote a file translator plugin which would run these operations and import the resulting data. This allowed us to reference these recipes into maya. So for example, just say an artist wants to see the latest version of a character’s skinned geometry and facial rig. They would simply reference in the appropriate recipe. The file translator would then run the latest operations on the latest data and dump the result into the maya scene. Each time the scene is loaded the latest data is being operated on, so artists were always seeing the latest data without having to remember to publish their files or anything like that. The process was transparent.
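
The actual format and tooling were proprietary so I can’t share them, but the core of the runner is simple enough to sketch. A recipe is just an ordered list of operations with arguments, and the translator walks that list – the operation implementations below are empty stubs and the recipe contents are made up, but the dispatch is the essence of it:

#stub implementations - the real versions did the actual work via the maya API
def loadGeometry( path ): pass
def mergeGeometry( targets ): pass
def copySkinningAndSkeleton( source, target ): pass
def replaceMaterial( find, replaceWith ): pass

#map operation names (as stored in the recipe file) to the functions that perform them
OPERATIONS = { 'LoadGeometry': loadGeometry,
               'MergeGeometry': mergeGeometry,
               'CopySkinningAndSkeletonFromGeometry': copySkinningAndSkeleton,
               'ReplaceMaterial': replaceMaterial,
               }

#a recipe is just an ordered list of (operationName, arguments) pairs
exampleRecipe = [ ( 'LoadGeometry', dict( path='chars/hero/body_final.ma' ) ),
                  ( 'LoadGeometry', dict( path='chars/hero/face_final.ma' ) ),
                  ( 'MergeGeometry', dict( targets=[ 'body_final', 'face_final' ] ) ),
                  ( 'CopySkinningAndSkeletonFromGeometry', dict( source='body_proxy', target='body_final' ) ),
                  ( 'ReplaceMaterial', dict( find='proxy_mtl', replaceWith='hero_mtl' ) ),
                  ]

def runRecipe( recipe ):
	'''walks the recipe in order and dispatches each operation - this is effectively
	what the file translator plugin did every time a scene referencing a recipe was loaded'''
	for opName, kwargs in recipe:
		OPERATIONS[ opName ]( **kwargs )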

Obviously people still need to communicate closely. The situation is akin to multi-processing. Some problems can easily be spread across multiple processors while others are not. And even problems that can be split up don’t always get faster when you throw more processors at the problem. Communication between people can be a bottleneck. There is definitely a sweet spot, it just depends on the specifics of the asset being produced.

The system I worked on used a proprietary data format which worked out ok, but we ended up implementing a lot of basic functionality ourselves – wrap deformation, soft selection, skinning, merging of vertices etc… It would have been a lot more powerful had I been able to use blender for the data manipulation, because it has all of this functionality already.

Anyway… Maybe once blender is available as a standalone python module we can get all major 3d apps to implement blender exporters and a general system like this can be implemented. Blender as a 3d exchange format would be awesome. WAY more awesome than collada.

This post is public domain

::blender would make a GREAT python module::

June 4th, 2011 by hamish

Y’know what would be AWESOME?! Blender as a standalone python module. If I was able to do this:

import bpy
scene = bpy.load( "c:/somefile.blend" )  #imaginary API of course, but something along these lines
for obj in scene.objs:
    ...
scene.save()

Imagine how cool that would be! You’d have the world’s most powerful, scriptable 3d/2d geometry library. It’d be awesome.

Incidentally, Houdini does exactly this as far as I understand. I don’t know much about houdini (apart from how unbelievably cool it looks) but I watched a video on this where they imported the houdini module from within maya, and then did a whole bunch of cool stuff and blatted it into maya. You could basically leverage a huge amount of houdini functionality in any app that runs python. It also means you can do a whole lot of stuff without the horrid cost of loading the UI.

As cool as it is that houdini can do this, it would be infinitely cooler if blender could do this.

How ’bout it blender devs? I think this feature would make blender amazingly useful to the games/film community. I’ll go into this a bit more in a future post.

This post is public domain

::A more thorough Interface class::

May 19th, 2011 by hamish

I did a bit more messing about this evening with the interface class idea. I think the below implementation is fairly complete. It handles multiple inheritance cases where the interface gets satisfied by one of the parent classes.

def interfaceTypeFactory( metaclassSuper=type ):
	'''
	returns an "Interface" metaclass.  Interface classes work as you'd expect.  Every method implemented
	on the interface class must be implemented on subclasses otherwise a TypeError will be raised at
	class creation time.

	usage:
		class IFoo( metaclass=interfaceTypeFactory() ):
			def bar( self ): pass

		subclasses must implement the bar method

	NOTE: the metaclass that is returned inherits from the metaclassSuper arg, which defaults to type.  So
	if you want to mix together metaclasses, you can inherit from a subclass of type.  For example:
		class IFoo( metaclass=interfaceTypeFactory( trackableTypeFactory() ) ):
			def bar( self ): pass

		class Foo(IFoo):
			def bar( self ): return None

		print( IFoo.GetSubclasses() )
	'''
	class _AbstractType(metaclassSuper):
		_METHODS_TO_IMPLEMENT = None
		_INTERFACE_CLASS = None

		def _(): pass
		_FUNC_TYPE = type( _ )

		def __new__( cls, name, bases, attrs ):
			newCls = metaclassSuper.__new__( cls, name, bases, attrs )

			#if this hasn't been defined, then cls must be the interface class
			if cls._METHODS_TO_IMPLEMENT is None:
				cls._METHODS_TO_IMPLEMENT = methodsToImplement = []
				cls._INTERFACE_CLASS = newCls
				for name, obj in attrs.items():
					if type( obj ) is cls._FUNC_TYPE:
						methodsToImplement.append( name )

			#otherwise it is a subclass that should be implementing the interface
			else:
				if cls._INTERFACE_CLASS in bases:
					for methodName in cls._METHODS_TO_IMPLEMENT:

						#if the newCls' methodName attribute resolves to the same underlying function as
						#the interface method, then the method hasn't been implemented.  It's done this way
						#because the newCls may be inheriting from multiple classes, one of which satisfies
						#the interface - so we can't just look up the methodName in the attrs dict
						newMethod = getattr( newCls, methodName, None )
						interfaceMethod = getattr( cls._INTERFACE_CLASS, methodName )

						#compare via __func__ so this works with both unbound methods (python 2) and plain functions (python 3)
						if getattr( newMethod, '__func__', newMethod ) is getattr( interfaceMethod, '__func__', interfaceMethod ):
							raise TypeError( "The class %s doesn't implement the required method %s!" % (name, methodName) )

			return newCls

	return _AbstractType

class ITest( metaclass=interfaceTypeFactory() ):
	def something( self ): pass
	def otherthing( self ): pass

class Test_implementsAll(ITest):
	def something( self ): pass
	def otherthing( self ): pass

class Test_subclassImplementsAll(Test_implementsAll):
	pass

class SimilarInterface(object):
	def something( self ): pass
	def otherthing( self ): pass

class MultipleInheritanceTest(SimilarInterface, ITest):
	pass

#will throw TypeError
class Test_implementsSome(ITest):
	def something( self ): pass

#will throw TypeError
class Test_implementsNone(ITest):
	pass

There are some simple tests at the bottom that demonstrate the idea.

As you can see, the class MultipleInheritanceTest has no methods, but its parent class SimilarInterface does implement the required methods, so the class passes without issue.

Anyway – I figured after the previous post I should at least finish the thought. The implementation in the last post wasn’t complete.

This post is public domain

::Interface classes in python::

May 18th, 2011 by hamish

Python doesn’t really provide a defined way to implement interfaces. Python 2.6 provides abstract base classes – although what I don’t like about that implementation is that you don’t get an exception until you try to instantiate the broken subclass, by which time it might be too late. At least, that’s what it looks like from what I’ve read (for the most part I’m still stuck using python 2.5, as I mainly work in maya 2009, so I haven’t bothered messing with any of this yet). I want a solution that happens at parse time – the equivalent of a compile-time check in statically typed languages.
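
For reference, this is roughly what the 2.6 abstract base class route looks like – the broken subclass defines without complaint, and the TypeError only shows up when you try to instantiate it:

import abc

class IFoo( object ):
	__metaclass__ = abc.ABCMeta  #2.6 syntax; in python 3 you'd write class IFoo( metaclass=abc.ABCMeta )

	@abc.abstractmethod
	def bar( self ): pass

class BrokenFoo( IFoo ):
	pass  #no bar() implementation, but no error here either

#only now does anything complain:
#BrokenFoo()  ->  TypeError: Can't instantiate abstract class BrokenFoo with abstract methods bar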

So I toyed around a bit this evening and came up with what you see below. It seems to work – at least for this trivial case. I’m not sure this is necessarily a good implementation, but it goes to show that it’s reasonably easy to do.

def interfaceFactory( metaclassSuper=type ):
	class AbstractClass(metaclassSuper):
		_METHODS_TO_IMPLEMENT = None

		def __new__( cls, name, bases, attrs ):
			subCls = metaclassSuper.__new__( cls, name, bases, attrs )

			#if this hasn't been defined, then cls must be the interface class
			if cls._METHODS_TO_IMPLEMENT is None:
				def _(): pass
				funcType = type( _ )
				cls._METHODS_TO_IMPLEMENT = methodsToImplement = []
				for name, obj in attrs.items():
					if type( obj ) is funcType:
						methodsToImplement.append( name )

			#otherwise it is a subclass that should be implementing the interface
			else:
				for methodName in cls._METHODS_TO_IMPLEMENT:
					if methodName not in attrs:
						raise TypeError( "The subclass %s doesn't implement the %s attribute!" % (name, methodName) )

			return subCls

	return AbstractClass

class TrackableType(type):
	_SUBCLASSES = []

	def __new__( cls, name, bases, attrs ):
		newCls = type.__new__( cls, name, bases, attrs )
		cls._SUBCLASSES.append( newCls )

		return newCls

class ISomething( metaclass=interfaceFactory( TrackableType ) ):
	def something( self ): pass
	def otherthing( self ): pass

class Something(ISomething):
	def something( self ): pass

If you run this code you’ll see a TypeError raised complaining that the Something class doesn’t implement the otherthing method – as soon as you import the script, not when you try to instantiate a Something instance.

You’ll notice it only cares about methods being implemented – so if you wanted to force implementation of various properties in inherited classes, you’d probably need a few more conditionals in there – but I generally try to avoid properties. They feel dirty.

Anyway, nothing terribly useful, but I thought others might find it interesting.

This post is public domain

::Python Objects In A TextScrollList::

May 14th, 2011 by hamish

One of the more useful classes in baseMelUI is the MelObjectScrollList. It’s simplified UI code enough times for me that I figured it’d be worthwhile pointing it out to others.

The MelObjectScrollList class is a subclass of MelTextScrollList – which is a wrapper around the cmds.textScrollList widget command. The class provides a neat object oriented way to interact with the scroll list. It hides some of the weirdness of the command (such as indices starting at 1 – WTF?!) and makes it possible to write object oriented code with all the good stuff you expect.

Back to the MelObjectScrollList – the idea with this class is to use the textScrollList widget to display lists of data of any kind. Want a list of dictionaries? No problem. Want a list of MObjects? Vectors? You get the idea. You can pass any python objects you want into the UI and it will display them in the widget, let users select them, and then return you the list of objects that the user selected. It’s very useful.

So how does it work?

Well, the base class contains most of the functionality you need.  There are two general cases.  If you want to display lists of objects that already have a reasonably sensible, user friendly string representation, then you can use the MelObjectScrollList directly:

import maya.cmds as cmds
from baseMelUI import MelObjectScrollList

w = cmds.window( 'test', t='test', h=300, w=300 )
c = cmds.columnLayout()
objScrollList = MelObjectScrollList( c, h=250 )
objScrollList.setItems( [ [], {}, ['some', 'list', 'of', 'strings'], dict, MelObjectScrollList ] )
cmds.showWindow( w )

def onSel( *a ):
	print objScrollList.getSelectedItems()

objScrollList.setChangeCB( onSel )

Incidentally I don’t normally write UI code like this – I just figured doing it this way might make it more accessible to folks who don’t use baseMelUI for their UI creation. It’s also a good demonstration of how the library can be used in a mix and match way (even with pymel!).

If you run this code (and have the zooToolBox – or at the very least the baseMelUI module – in your python path) you’ll see a window pop up with the above objects in it. They look just as they would if you’d called print on each one.

Now if you want to customize the way these objects are displayed in the UI, you simply need to subclass MelObjectScrollList and override the itemAsStr method. Here is an example:

from filesystem import Path
from baseMelUI import MelVLayout, MelObjectScrollList, BaseMelWindow

class FilenameScrollList(MelObjectScrollList):
	def itemAsStr( self, item ):
		return Path( item )[-1]

class Test(BaseMelWindow):
	WINDOW_NAME = 'test'

	def __init__( self ):
		f = MelVLayout( self )
		self.fileListUI = FilenameScrollList( f )
		self.fileListUI.setItems( Path( "c:/windows" ).files() )
		self.fileListUI.setChangeCB( self.on_itemSelect )
		f.expand = True
		f.layout()
		self.show()
	def on_itemSelect( self, *a ):
		print self.fileListUI.getSelectedItems()

Test()

In this example you should see a window open up with a list of the file names in c:\windows. When you select an entry in the list, you’ll see the full path to that file printed in the script editor. The FilenameScrollList keeps track of the objects you put into it. It acts as a broker between the data and the UI. So using it simplifies code a heap.

Another neat feature that’s built into the base class is view filtering. Building on the above code you can do things like this:

ui = Test()
ui.fileListUI.setFilter( '.exe' )

As you can see, once you run the setFilter() method the list now only displays files containing “.exe”. The UI still retains a list of all the files, but it only displays the items that pass the filter.

Lastly the other useful feature the class provides is re-ordering. The methods are “moveSelectedItemsUp()” and “moveSelectedItemsDown()”. You can easily hook up these methods to buttons, popup menus or whatever else you want.

from filesystem import Path
from baseMelUI import BaseMelWindow, MelHSingleStretchLayout, MelVLayout, MelObjectScrollList, MelButton

class FilenameScrollList(MelObjectScrollList):
	ALLOW_MULTI_SELECTION = True

	def itemAsStr( self, item ):
		return Path( item )[-1]

class Test(BaseMelWindow):
	WINDOW_NAME = 'test'

	def __init__( self ):
		f = MelHSingleStretchLayout( self )
		self.fileListUI = FilenameScrollList( f )
		self.fileListUI.setItems( Path( "c:/windows" ).files() )
		self.fileListUI.setChangeCB( self.on_itemSelect )
		f.setStretchWidget( self.fileListUI )

		v = MelVLayout( self )
		MelButton( v, l='up', c=self.on_up )
		MelButton( v, l='down', c=self.on_down )
		v.expand = True
		v.layout()

		f.expand = True
		f.layout()
		self.show()
	def on_itemSelect( self, *a ):
		print self.fileListUI.getSelectedItems()
	def on_up( self, *a ):
		self.fileListUI.moveSelectedItemsUp()
	def on_down( self, *a ):
		self.fileListUI.moveSelectedItemsDown()

Test()

NOTE: make sure to grab the latest baseMelUI.py from here. While writing this post I fixed a bug with the moveSelectedItems methods when a filter is set.

This post is public domain

::simple skin weights tool::

May 10th, 2011 by hamish

Referencing. What a pain in the butt it can be. One of the big pains can be when you’re setting up a character. Skinning happens in one file, a rig in another, and animation in a third file. At least, that’s how I’ve generally done it. There are variations on this theme, but the basic idea of model and rig in one file being referenced into an animation file is a pretty common paradigm, I imagine.

So a lot of the time I’ll author my skeleton, skin the geo and build the rig, only to find that the skinning doesn’t hold up so well as soon as I strike a certain pose in the animation.

Some people solve this by having a “test suite” of calisthenics they put their characters through before they get animated. And sure, this can work well provided you have some way of describing these calisthenics in a generalized way that will apply to any and all characters that come down the pipe. A spider is going to need a different set of poses to a human, which will be different again for a multi-headed dragon which is different again from a snake like creature with 6 arms.

Anyway, this happens enough that I figured I’d try to alleviate the problem a bit. So I wrote a tool that will take skin weight edits made in the animation file, and push them back to the file the model geometry is stored in.

The process goes something like this.

You’re animating away, and you strike a pose only to realize you’ve got horrible shearing happening somewhere. So in your animation scene, with the pose that breaks the skinning, you crack open whatever weight editing/painting tool you’re into and go about fixing the problem area. Now, because you’re in an animation scene, the skinCluster you’re editing actually lives in a completely different file.

So you run this tool. The tool grabs the skinCluster edits made in the animation scene and shoves ’em into memory. Then it removes the ref edits from the skinCluster node and saves the animation scene. Then it opens up the file the skinCluster node lives in, applies the fixed weighting it stored from the animation scene and saves the file. Finally it opens the scene you were originally in and voila. Just like that, your weights are fixed across the board.
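
For the curious, here’s a heavily stripped down sketch of the idea using maya.cmds – it assumes a single mesh and skinCluster, skips all the error handling and most of the namespace juggling the real tool has to do, and the function name is made up:

from maya import cmds

def pushSkinWeightsToSourceFile( mesh, skinCluster ):
	'''
	rough sketch: store the weights as they currently are in the animation scene,
	strip the ref edits, then re-apply the weights in the file the skinCluster lives in
	'''
	#1. grab the current (fixed) weighting from the animation scene
	influences = cmds.skinCluster( skinCluster, q=True, influence=True )
	weights = {}
	for vert in cmds.ls( '%s.vtx[*]' % mesh, flatten=True ):
		weights[ vert ] = cmds.skinPercent( skinCluster, vert, q=True, value=True )

	#2. remove the ref edits on the skinCluster and save the animation scene
	#(edits can generally only be removed while the reference is unloaded)
	refNode = cmds.referenceQuery( skinCluster, referenceNode=True )
	skinFile = cmds.referenceQuery( skinCluster, filename=True, withoutCopyNumber=True )
	animScene = cmds.file( q=True, sceneName=True )

	cmds.file( unloadReference=refNode )
	cmds.referenceEdit( skinCluster, removeEdits=True, successfulEdits=True, failedEdits=True )
	cmds.file( loadReference=refNode )
	cmds.file( save=True )

	#3. open the file the skinCluster actually lives in and apply the stored weights
	#NOTE: in the animation scene everything is namespaced - the real tool maps the namespaced
	#names back to local names properly; this sketch just naively chops the namespace off
	cmds.file( skinFile, open=True, force=True )
	localSkin = skinCluster.split( ':' )[-1]
	localInfluences = [ inf.split( ':' )[-1] for inf in influences ]
	for vert, vertWeights in weights.items():
		localVert = vert.split( ':' )[-1]
		cmds.skinPercent( localSkin, localVert, transformValue=list( zip( localInfluences, vertWeights ) ) )
	cmds.file( save=True )

	#4. finally, re-open the animation scene you started in
	cmds.file( animScene, open=True, force=True )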

It’s not the world’s most useful tool – but it saves the occasional headache, and it gets used quite regularly. It makes it easy to edit/fix skin weights “in context” without any convoluted workflow. In fact, having such a tool means minor skin weight polishing happens more frequently now than before the tool existed, because it’s so easy to do. Skin fixes can be done in any animation as soon as the problem is seen.

NOTE: maya supposedly has functionality to “Save Reference Edits” but as usual, it seems like a largely useless feature.  Has anyone managed to use it successfully?

This post is public domain

::useful-ifying error messages::

May 4th, 2011 by hamish

Printing out warning and error messages from a script is the easiest way to communicate things to a user. Maya has generally trained users reasonably well to look for warnings and errors in the script editor. This isn’t really ideal mind you – but that’s a topic for another time.

Anyway, as useful as warnings and errors are, if you’re writing modular, re-usable code then writing stuff to the script editor isn’t always as helpful as it should be. Sometimes it’s hard to know exactly where a message is coming from. Sure, you can grep your code base, but even that isn’t easy to do if the message itself is constructed at runtime.

In the olden days of MEL I would often try to put the function name at the beginning of the message in the hope of making it easier to track down the reference at a later date – but this wasn’t terribly ideal either. It required me to remember to do it, plus it was just one more reference to update when refactoring.

Python however gives you the tools to solve this problem in a nifty way. You can write a print handler that will figure out where in the code the print/warning/error statement happened and put all that in the message.

For example, I can write code like this:

printWarningStr( "WOW!" )

And the following message gets spewed to the console:

# Warning: WOW!: from line 14 in the function main in the script D:\someScript.py #

As you can see, the message encodes the warning prefix (it also turns the script line pink, as per usual warning messages in maya), as well as the line number, the name of the function it came from and the location of the script.

Python is such an awesome language – so deliciously introspective.

So how does this get done?

Well, I have two places where I set up this code. The first is in a generic, maya-agnostic module so that I can use it in non-maya tools. It looks like this:

import inspect

def generateTraceableStrFactory( prefix, printFunc=None ):
	'''
	returns 2 functions - the first will generate a traceable message string, while
	the second will print the generated message string.  The second is really a
	convenience function, but is called enough to be worth it

	you can also specify your own print function - if no print function is specified
	then the print builtin is used
	'''
	def generateTraceableStr( *args, **kw ):
		frameInfos = inspect.getouterframes( inspect.currentframe() )

		_nFrame = kw.get( '_nFrame', 1 )

		#frameInfos[0] contains the current frame and associated calling data, while frameInfos[1] is the frame that called this one - which is the frame we want to print data about
		callingFrame, callingScript, callingLine, callingName, _a, _b = frameInfos[_nFrame]
		lineStr = 'from line %s in the function %s in the script %s' % (callingLine, callingName, callingScript)

		return '%s%s: %s' % (prefix, ' '.join( map( str, args ) ), lineStr)

	def printTraceableStr( *args ):
		msg = generateTraceableStr( _nFrame=2, *args )
		if printFunc is None:
			print( msg )
		else:
			printFunc( msg )

	return generateTraceableStr, printTraceableStr

generateInfoStr, printInfoStr = generateTraceableStrFactory( '*** INFO ***: ' )
generateWarningStr, printWarningStr = generateTraceableStrFactory( '*** WARNING ***: ' )
generateErrorStr, printErrorStr = generateTraceableStrFactory( '*** ERROR ***: ' )

As you can see, this is a factory function that returns a function that will generate the message, and one that will actually print the message. You almost always want to use the print one, but if you ever need to generate an info/warning/error message using the same code, the generate function is available.

So the factory function allows you to specify a print handler. It defaults to python’s built-in print function (or statement if you’re pre 3.0), but this makes it easy to generate maya-specific print functions using MGlobal.displayWarning and MGlobal.displayError. In another module I have the following code:

from maya.OpenMaya import MGlobal

generateInfoStr, printInfoStr = generateTraceableStrFactory( '*** INFO ***', MGlobal.displayInfo )
generateWarningStr, printWarningStr = generateTraceableStrFactory( '', MGlobal.displayWarning )
generateErrorStr, printErrorStr = generateTraceableStrFactory( '', MGlobal.displayError )

So in maya tools I simply use the above functions instead of print and voila – I get useful information in the console spew.

Anyway – not a super useful chunk of code, but I thought others might find such a technique useful. It’s ~15 lines of code that occasionally saves me a grep of the codebase. Plus it’s good practice to have logging hooks like this in place.

This post is public domain

::beware!::

April 25th, 2011 by hamish

Sometimes I wish I could go back in time and slap some sense into myself. One of the very first things I discovered when I was learning python was operator overloading. I’d not used a language with such power before and stupidly I put this newfound knowledge to use in all sorts of crazy ways.

Today I found an early piece of code I wrote. It’s a useful chunk of code – it parses config files.

Anyway, I wanted to add some functionality to the code. See, the config files it parses are often written by hand, and the format supports C-style comments. Currently the parser throws away the comments, which is no drama if you just need read support – but a feature request came up that would require writing these files from a tool. So I need to preserve comments as best I can.

Not a big drama to add the functionality, except that part of the code I ended up having to touch involved one of the operator overloads – in this case the [] operator (__getitem__ in python). The existing overload was done in a super weird way (which is what I wanted to slap “past me” for). I made it so you could do this:

f = ConfigFile( 'd:/something.txt' )
f.read()
f[0, 2, -1, 9]

Which would navigate the config file document hierarchy. Why?! Well, probably because I was a n00b and figured it’d be “cool”. SLAP!

Anyway, what’s the point here? Well, I guess what I’m trying to say is (apart from don’t write idiotic code) – if you’re going to overload operators, be really, really careful. Tracking down where the code is used is hard. You can’t really grep your codebase for anything in particular – after all, the whole point of operator overloading is that the interface to the action looks like a standard language operator. In this particular case it wasn’t a big deal – the code is used in only a few places. But if this code had wider use, it would’ve been a nightmare.

This is pretty obvious to anyone who has been coding for a while – but after this recent reminder I figured a little PSA might save others some pain.

This post is public domain

::more on being back::

April 14th, 2011 by hamish

After this post, Brad Clark from Rigging Dojo asked me for more info.  Not sure if I can link to a tweet, but the question was this:

just wondering what you are feeling are sticking points, what
area are you having to work hardest on now v. back in the flow

I thought the answer was worth more than a tweet, so here we are. A quick bit of background to set the stage: I started out in 3d as an animator. I morphed into the role of a tech artist because those around me didn’t step up – technical leadership is important, and I guess it suited me. I’ve discussed it here before (third paragraph) so I won’t bore you again. But basically that’s been my path. I started out doing lots of animation, and have ended up doing not so much. In fact, I have done very little animation over the last almost five years.

For now the animation work I’m doing is very much a part time thing.  There are still loads of technical problems and improvements to be made, and honestly, I’m a much better technical artist than I am an animator.  But doing animation is something that I think is really important for me as a technical animator.  Using the tools I’ve written, working through the workflows I’ve helped define, having to deal with all the bugs, shortcomings etc of the work environment I’ve helped create puts me in the shoes of my users and forces me to see and understand the implications of my design decisions.  And I think thats really important.

Anyway – sticking points. Well, the first would be the calibre of the folks around me. ;) Yeah, they’re all kinda good. I mean, I was never the best animator, but the animators I work with literally are some of the best. I guess this isn’t really a sticking point – it’s more of an opportunity for me to learn and grow.

But the other big thing is that it’s hard to stop thinking about the technical side of things while I’m animating. It’s been such a part of my mental process for so long that it’s really hard to turn it off. And I think this is the biggest thing. So many parts of the animating process have the potential to be improved, sped up, optimized, made easier etc… And when I’m animating, all these thoughts are running through my head. So just pushing those thoughts aside – or at least shelving them for later – is hard.

Like any creative process, you do your best work when you’re in “the zone”. And getting into “the zone” is all about focusing all your thoughts on what you’re doing. Whether that be animating, painting, composing music or writing code – if you can push all thoughts aside except for those related to what you’re doing, you can usually get into this amazing mental groove. It’s awesome. So I find it very hard to get into “the zone” when I’m animating these days. It doesn’t stop me from doing work, but it does make it harder and less efficient.

Anyway, Brad – and anyone else who was interested, I hope that answers your question!

This post is public domain