Style question for maya properties


Here is a bunch of code which is functionally identical:

import maya.cmds as cmds
from maya.api.OpenMaya import MVector

# functional access 

def get_translation(obj):
    return MVector(*cmds.getAttr("{}.t".format(obj)))        
def set_translation(obj, v_as_tuple):
    cmds.setAttr("{}.t".format(obj), *v_as_tuple)

print(get_translation('pSphere1'))
set_translation('pSphere1', (1, 2, 3))

# one function for both get and set

def translation(obj, **kwargs):
    set_value = kwargs.get('set', None)
    if set_value is not None:
        return set_translation(obj, set_value)
    return get_translation(obj)

print(translation('pSphere1'))
translation('pSphere1', set=(1, 1, 1))
# accessor object  -----------------
class Translation(object):
    def __init__(self, obj):
        self.obj = obj

    @property
    def value(self):
        return get_translation(self.obj)

    @value.setter
    def value(self, v):
        set_translation(self.obj, v)

t = Translation('pSphere1')
print(t.value)
t.value = (5, 4, 3)

# pymel-like property access -------------
class TranslationProperty(object):
    def __get__(self, instance, owner):
        return get_translation(instance)

    def __set__(self, instance, v):
        set_translation(instance, v)

class Proxy(str):
    translation = TranslationProperty()

p = Proxy('pSphere1')
print(p.translation)
p.translation = (3, 2, 1)

All of these do the same thing. Which one do people like, and why? This is toy code; in production I’d use cmds.xform to set translation in world or object space, and there are other things to do. I’m just interested to hear how people feel about the differences between functions, accessors, dot-style access, or possibly other kinds of syntax.



I’m partial to the pymel-like one, but I’m a fan of descriptors.

It also seems the easiest to extend for rotation / scale, adding world-space versions, etc…

Second place would be the way pymel handles it, with a custom attribute class that you interface with, or flat functions.


It’s easy enough to make a factory out of the properties to support stuff, the big question there is whether you want to do it ‘on demand’ – which makes it less deterministic – or up front like mGui does, which means zillions of templates for different node types.

I’m also still searching my feelings ™ to see how I feel about a version where getting and setting are stateful – which is the pymel way – vs stateless and functional. In the end it all goes to the same place, after all…


The pymel attributes are actually pretty stateless. vars(node.t) shows that they keep a reference to the node, and the MPlug, and potentially the attribute name.

Unless you’re meaning something else by stateless.

PyNodes are expensive to construct and initialize, but they don’t hold very much data internally. Instead they just proxy the underlying maya objects. The expense is more from the breadth of items that a PyNode could be, and the cost of playing 20 questions with maya to determine that type.


Yeah, that’s the rub; you’ve got to choose between:

a) just explicitly set the name and type of all attributes (at least, all you want to support) on all node types
b) try to derive them and cache them as needed
c) resolve them at runtime every time.

All have drawbacks.

What I meant by ‘stateful’ is only that in the class and attribute examples above, the objects are long-lived and ‘setting’ them is going to change the original object. In some other universe you’d keep it more like the first example (and like Maya itself), where you have to explicitly opt in to assignments that change the Maya scene.
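To make that concrete, here’s a rough sketch of what ‘opting in’ might look like: a hypothetical DeferredAttr that stages writes locally and only touches the scene when you call commit(). The getter/setter are injected (in real use they’d be something like the get_translation / set_translation functions in the first example), so nothing here actually needs Maya:

```python
class DeferredAttr(object):
    """Stage attribute writes locally; nothing hits the scene until commit().
    Sketch only -- getter/setter stand in for get/set_translation."""

    def __init__(self, obj, getter, setter):
        self.obj = obj
        self._getter = getter
        self._setter = setter
        self._pending = None  # staged value, not yet written

    @property
    def value(self):
        # a staged value wins if one exists; otherwise read through
        if self._pending is not None:
            return self._pending
        return self._getter(self.obj)

    @value.setter
    def value(self, v):
        # stage the change locally; the scene is untouched
        self._pending = v

    def commit(self):
        # explicitly push the staged value into the scene
        if self._pending is not None:
            self._setter(self.obj, self._pending)
            self._pending = None
```

Usage would read like the stateful version, except nothing changes the scene until you say so: `t.value = (1, 2, 3)` then, later, `t.commit()`.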


Another fan of pymel-like one.
For me, the convenience and scalability (with reduced impact on refactoring) is the biggest win and worth the extra time spent initialising.
If performance is a potential issue then there’s probably a few design choices that can be made to reduce the construction cost.
Typically though, convenience wins out.


I’ve thought about doing something like this in the past, potentially a leaner wrapper without all the bells and whistles. But it’s never really made sense, probably because I’ve not had a project that really needs a solution of that scale.

Hmm. Not sure if I’m entirely getting the difference.
node.translate.set((5,5,5)) vs set_translation('node', (5,5,5)) feels pretty comparable: one uses objects and the other uses functions, both apply a side effect to the node, and under the hood both boil down to cmds.setAttr('node.translate', *(5,5,5))


Yeah, they are all the same under the hood – at some point you ‘commit’ the change and it’s reflected in the scene. There’s almost an analogy to databases: you get the values, do some stuff, and then slam them back in, changing the state of the database.

The design issue is… when do you want to do that? pymel and the version above both ‘commit’ on every change. In database land you are supposed to do bulk updates, making big atomic changes in one go.

Is that an interesting direction in maya-land?
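In code, the bulk-update idea might be no more than a little transaction object. This is just a sketch with hypothetical names: a BulkUpdate context manager that queues writes and only flushes them if the block completes cleanly, where apply_fn stands in for cmds.setAttr:

```python
class BulkUpdate(object):
    """Collect attribute writes and apply them in one go on exit,
    loosely like a database transaction. Sketch only -- apply_fn
    stands in for cmds.setAttr."""

    def __init__(self, apply_fn):
        self._apply = apply_fn
        self._queue = []

    def set(self, plug, *values):
        # queue the write; nothing is applied yet
        self._queue.append((plug, values))

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # only commit if the block finished without an exception
        if exc_type is None:
            for plug, values in self._queue:
                self._apply(plug, *values)
        return False  # let any exception propagate
```

So `with BulkUpdate(cmds.setAttr) as batch: batch.set('pCube1.tx', 5.0)` would batch everything up and write it in one go at the end of the block, and a failure mid-block would leave the scene untouched.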


Oh and then there’s the eternal maya question of handling renames / reparenting. But that’s solvable.


Ah okay I get what you’re saying now.


So, I suppose if you use the lightweight proxy model you could just split the difference: for common types (transforms, meshes, lights) you have prebuilt options with known properties, and when you hit a property you don’t recognize you build an accessor on the fly and cache it.

For most uses it’d be fine, but it would be murder with large numbers of objects. Then again, you don’t want to do bulk operations OO-style in any paradigm; they’re always slower than the imperative alternatives.


Personally, flat functions unless there is a current need.

Do you have a requirement where you need to commit data in one go? I personally have never come across such a situation.

In most cases, what I’ve had to do is collect data, process it, and write to the scene, all in a blocking call. All of which can be done with either method, but will probably be faster with flat functions.


@Klaudikus: The best use case would be to do bulk edits, particularly with xform which can be used on a bunch of objects at once. But it’s not a natural idiom in Maya, so maybe not worth pursuing.

Followup question:

    connect('pCube1.tx', 'pSphere1.ty')

    pCube1._tx.connect(pSphere1.plugs.ty)

    # the previous two could also be written with operator overloads:

    pCube1.plugs.tx >> pSphere1.plugs.tx    # roughly what pymel does

    pCube1._tx >> pSphere1._tx

Property access is, programming-wise, really easy. Being able to contextually switch between pCube1.tx getting/setting values and being a target for connections is more complex.
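The ‘>>’ syntax itself is cheap, for what it’s worth: it’s just `__rshift__`. A minimal sketch, with a hypothetical Plug class and an injected connect_fn standing in for cmds.connectAttr (the hard part, contextually deciding when pCube1.tx means a value and when it means a plug, is left out):

```python
class Plug(object):
    """A bare-bones plug object supporting 'a >> b' for connections.
    Sketch only -- connect_fn stands in for cmds.connectAttr."""

    def __init__(self, path, connect_fn):
        self.path = path
        self._connect = connect_fn

    def __rshift__(self, other):
        # 'self >> other' connects self into other
        self._connect(self.path, other.path)
        return other  # returning other allows chaining: a >> b >> c
```

So `Plug('pCube1.tx', cmds.connectAttr) >> Plug('pSphere1.ty', cmds.connectAttr)` would issue a single connectAttr under the hood.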


Here’s one more option, dictionary-like access.

pCube1["tx"] = 5.0
pCube1["tx"] >> pCube1["ty"]
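The dictionary style is only a few lines of `__getitem__` / `__setitem__`. A sketch with hypothetical names, where get_fn / set_fn stand in for cmds.getAttr / cmds.setAttr so the class itself doesn’t need Maya:

```python
class AttrDict(object):
    """Dictionary-style attribute access on a node. Sketch only --
    get_fn/set_fn stand in for cmds.getAttr/cmds.setAttr."""

    def __init__(self, node, get_fn, set_fn):
        self.node = node
        self._get = get_fn
        self._set = set_fn

    def _plug(self, attr):
        # build the 'node.attr' plug string in one place
        return "{0}.{1}".format(self.node, attr)

    def __getitem__(self, attr):
        return self._get(self._plug(attr))

    def __setitem__(self, attr, value):
        self._set(self._plug(attr), value)
```

With Maya behind it, `pCube1 = AttrDict('pCube1', cmds.getAttr, cmds.setAttr)` would make `pCube1["tx"] = 5.0` work as above; the `>>` connection syntax would need __getitem__ to return a plug object rather than a raw value.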

As for database-like bulk writes, have a look at the MDagModifier().

from maya.api import OpenMaya as om

mod = om.MDagModifier()
transform = mod.createNode("transform")
shape = mod.createNode("mesh")
mod.reparentNode(shape, transform)

fn = om.MFnDagNode(transform)
mod.newPlugValueFloat(fn.findPlug("translateX", False), 5.0)

# Nothing written, until..
mod.doIt()

One use case is performing doIt() only when everything is in order.

For example, during an auto-rigging process there may be a number of individual modules built with little to no knowledge of each other. At some point, one of the modules has a problem and fails. Rather than being left with a half-built rig, the exception thrown can simply halt the process before doIt is called.

Filling a modifier with future operations is quite a bit faster than actually writing to Maya, which means you could have it filled interactively on every change to an input template/guide/proxy/etc. without actually affecting the scene, just to see whether all is in order. Like linting in a source code editor.


@marcuso That’s definitely the sort of scenario I had in mind. bulk updates within a single undoChunk will behave almost like a dagModifier doIt() too, which is definitely a thing to consider.

Of course, the question is also whether that’s a common enough case to try to support as syntax, or you just write a class to handle bulk setting instead…


Part of the reason I’m looking at all of this stuff is trying to see a way to start limiting bugs from things like

cmds.getAttr (object_name + " .tx")

which is syntactically correct but will never return a valid result; it’s the kind of thing people stare at for ten minutes before they notice the extra space…
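Short of a full wrapper, one cheap defense is to never concatenate plug strings by hand and instead route them through a builder that fails loudly. A sketch, where the plug helper and its regex are hypothetical (tune the pattern to whatever attribute syntax you need to support):

```python
import re

# crude pattern for 'node.attr': no whitespace, allows namespaces
# (ns:node) and indexed/compound attributes (worldMatrix[0], t.tx)
_PLUG = re.compile(r'^[A-Za-z_][\w:|]*\.[A-Za-z_][\w\[\].]*$')

def plug(node, attr):
    """Build a 'node.attr' plug string, raising on anything malformed
    instead of letting it reach getAttr/setAttr silently."""
    candidate = "{0}.{1}".format(node, attr)
    if not _PLUG.match(candidate):
        raise ValueError("malformed plug: {0!r}".format(candidate))
    return candidate
```

With this, `cmds.getAttr(plug(object_name, "tx"))` turns the stray-space bug into an immediate, readable ValueError at the call site instead of a mystery failure downstream.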


This is one of the best reasons for a transactional style system.

I have done this so many times. Which is one of the reasons why I default to using the pynode attribute access syntax.


Somewhat, only it’s a little different.

Firstly, although the modifier itself has an undoIt() method, it isn’t actually undoable by the user. That is, undoIt() isn’t added to the undo queue alongside commands made via cmds or PyMEL, which can be somewhat confusing at first. You have to call undoIt() yourself, or add it to the undo queue yourself - an example of how to do this can be seen in apiundo.

Secondly, commands within an undoChunk are all still executed as they are called, whereas no command in a modifier is executed until doIt() is called. The difference is subtle and mostly visible when something goes wrong. Say you had 100 commands involving changes to the scene graph, and the 99th command fails: from the user’s perspective, the time it takes for the action to ultimately fail is the time taken to execute each of those 99 commands, let’s say 5 seconds. With a modifier, the time it takes to fail is just the time taken to store each not-yet-executed command in the queue, maybe 0.05 s. That, and of course the additional time taken to then undo the half-baked scene they end up with.

In the case of success however, then none of this applies; apart from the overall speed-up of using a modifier in place of cmds and PyMEL.


That apiundo is super useful; one of my guys was butting heads with exactly that just last week.


Oh yeah. That looks supremely useful.