Maya mock for tests


Does anyone use, or has created a mock of Maya’s maya.cmds module? E.g.

    try:
        from maya import cmds
    except ImportError:
        from mock.maya import cmds

    # Do test here

I’m looking to have tests run without the need for Maya, such as on travis-ci or any other form of distributed testing. I realise not everything can reliably be mocked, my requirements are:

- Create node
- Create attribute
- List nodes
- List attributes
- Connect/disconnect attributes
- Nest nodes
- File open/save
- Modify workspace

And though I’m sure more would surface in time, the goal is to make only the most straightforward tests available outside of Maya.
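To make the scope concrete, here is a rough sketch of what an in-memory stand-in for the first few requirements could look like. All the names here are hypothetical; this is not an existing mock package, and the flag names only loosely follow maya.cmds:

```python
# Hypothetical sketch of an in-memory stand-in for a few maya.cmds
# calls -- just enough for tests that create and list nodes and
# attributes. Not an existing package.

class MockCmds(object):
    def __init__(self):
        self._nodes = {}  # node name -> {attribute name -> value}

    def createNode(self, node_type, name=None):
        node = name or (node_type + '1')
        self._nodes[node] = {}
        return node

    def addAttr(self, node, longName=None):
        self._nodes[node][longName] = None

    def setAttr(self, plug, value):
        # plug is 'node.attribute', as in the real cmds.setAttr
        node, _, attr = plug.partition('.')
        self._nodes[node][attr] = value

    def listAttr(self, node):
        return sorted(self._nodes[node])

    def ls(self):
        return sorted(self._nodes)


cmds = MockCmds()
cmds.createNode('locator', name='foo')
cmds.addAttr('foo', longName='size')
cmds.setAttr('foo.size', 5)
print(cmds.ls())            # ['foo']
print(cmds.listAttr('foo')) # ['size']
```

A tool under test would only see the `cmds` name, so swapping this in at import time is enough for the simplest cases.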


Mocking tests, round 2.0

Why not just do your tests with the Maya standalone interpreter?


Hi passerby, thanks for your reply.

Without context, I would assume it’s safe to rephrase your question as “why use mocks?”, which is quite a broad topic.

But in short, running them via the standalone interpreter requires the standalone interpreter and I’m looking to distribute the running of tests to machines not necessarily equipped with Maya.

If you’re interested in some of the benefits of mocking, have a look at how mocking works with databases; the issues and use-cases are identical to working with Maya.



How would this be a valid test? Maya does some fancy things with returns that you cannot (reasonably) account for. For example:

    node = cmds.createNode('locator', name='foo')
    assertEqual(node, 'foo')
    node2 = cmds.createNode('locator', name='foo')
    assertEqual(node2, 'foo')

would fail in Maya (the second node comes back renamed 'foo1'), but not in a mocked environment.
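For what it's worth, that particular behaviour is mechanical enough that a mock could reproduce it. A rough sketch of Maya-style name uniquification (approximate; the real rules also involve namespaces and DAG paths):

```python
# Approximate sketch of Maya's name uniquification for a mocked
# createNode: a requested name that already exists gets a numeric
# suffix (foo -> foo1 -> foo2). The real rules have more edge cases
# (namespaces, DAG paths), so treat this as illustrative only.

_existing = set()

def unique_name(requested):
    candidate = requested
    suffix = 0
    while candidate in _existing:
        suffix += 1
        candidate = '%s%d' % (requested, suffix)
    _existing.add(candidate)
    return candidate

print(unique_name('foo'))  # foo
print(unique_name('foo'))  # foo1
print(unique_name('foo'))  # foo2
```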


Thanks TheMaxx for your input.

In a nutshell, the features exercised by these tests would rely on ideal circumstances within Maya. That is, I’m not looking to test whether Maya works or how it works, nor the integration between tools and Maya, but rather whether the behaviour of the tool is as expected; some behaviours are inherently dependent on features provided by the integration, and thus on the maya.cmds module.

Having said that, there will of course be tests utilising mayapy and tests running within the GUI as well, and I think it’s safe to say that I’m not looking for a full replacement of the maya.cmds module; just enough to get tests focused on the tool, as opposed to its integration, to run safely within a continuous integration environment.

How are you guys running tests on tools that depend on Maya?



We don’t run any automated tests on our Maya Tools. With each release, we go through a smoke test that encompasses the entire pipeline.
The TAs have a special development environment where we can sync each other’s changes ahead of release to do sanity checks.

Individual tools can log success/failure to a database as users use them, otherwise the users will report a bug if a problem slips through.

We’ve discussed unit tests and other strategies, but for our situation it’s not worth the extra investment.


The goal of this kind of testing is to prove that the code is doing what it’s been told to do despite upstream changes. That’s not the same thing as guaranteeing that it works! A unit test proves that a given chunk of code, ideally a nice small chunk, is doing expected stuff. Unit tests can’t really tell you if the expected behavior is ‘good’ - that’s a user test or an acceptance test thing. The value of unit tests is huge - in python, in particular, it’s super easy to make changes with unforeseen consequences, and unit tests are the canary in the coal mine that tells you when a change over here has an unexpected ripple over there. As long as that’s all you expect from them they are incredibly valuable.
Just yesterday, for example, my unit tests found an awesome and very subtle bug that was introduced by accident when somebody ran a code formatter on existing code. No changes at all - but by removing a comma, turning this expression:

    VERSION = ({'version': (0, 1), 'date': None, 'user': None},)

into this:

    VERSION = ({'version': (0, 1), 'date': None, 'user': None})

the formatter turned VERSION from a 1-element tuple into a dictionary. It’s a lot to expect a coder to spot that by eye! The code compiled, it even ran – and for 99% of things it does it worked. But the unit test correctly pointed out that the expected variable type had changed. And it pointed that out when the change was made - not two weeks from now when a user actually hit bad data and had a crash inside a tool, so we saw the problem and fixed it right away instead of combing through dozens of files to figure out why the XXX dialog was crashing or whatever.
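A test that catches that class of bug can be as small as a type assertion; something along these lines (illustrative, not the actual test from that codebase):

```python
# Illustrative guard against the formatter bug described above: if the
# trailing comma is dropped, VERSION silently becomes a plain dict and
# this test fails immediately. Not the actual test from that codebase;
# the dict values are placeholders.

VERSION = ({'version': (0, 1), 'date': None, 'user': None},)

def test_version_is_one_element_tuple():
    assert isinstance(VERSION, tuple)
    assert len(VERSION) == 1

test_version_is_one_element_tuple()
```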

So, unit tests are awesome. Just don’t expect them to eliminate all bugs. They just give you a higher class of bugs - bugs caused by dumb algorithms and bad architecture instead of typos or sloppy programming.

I run my tests in maya standalone - for anything except GUI stuff (which even Real Programmers™ find it hard to unit-test anyway) it’s equivalent, and way faster to iterate on. I sometimes will use mocks to handle commands that are too slow or situational for good tests. Here’s an example of some tests from mGui:

'''
Created on Mar 3, 2014

@author: Stephen Theodore
'''
from unittest import TestCase

import inspect

import maya.standalone
maya.standalone.initialize()

import maya.cmds as cmds
import maya.mel as mel

# records the arguments of the most recent mocked cmds.control call
LAST_ARGS = {}

def control_mock(*args, **kwargs):
    LAST_ARGS['args'] = args
    LAST_ARGS['kwargs'] = kwargs

cmds.control = control_mock

CONTROL_CMDS = ['attrColorSliderGrp']  # list truncated in the original post

import mGui.core as core
import mGui.properties as properties
import mGui.gui as gui


class test_CtlProperty(TestCase):
    '''
    very dumb test that just makes sure the CtlProperty is calling the
    correct command, arg and kwarg
    '''

    class Example(object):
        CMD = cmds.control

        def __init__(self, *args, **kwargs):
            self.Widget = 'path|to|widget'

        fred = properties.CtlProperty("fred", CMD)
        barney = properties.CtlProperty("barney", CMD)

    def setUp(self):
        LAST_ARGS['args'] = (None,)
        LAST_ARGS['kwargs'] = {}

    def test_call_uses_widget(self):
        t = self.Example()
        get = t.fred
        assert LAST_ARGS['args'][0] == 'path|to|widget'

    def test_call_uses_q_flag(self):
        t = self.Example()
        get = t.fred
        assert 'q' in LAST_ARGS['kwargs']

    def test_call_uses_q_control_flag(self):
        t = self.Example()
        get = t.fred
        assert 'fred' in LAST_ARGS['kwargs']

    def test_set_uses_widget(self):
        t = self.Example()
        t.fred = 999
        assert LAST_ARGS['args'][0] == 'path|to|widget'

    def test_set_uses_e_flag(self):
        t = self.Example()
        t.fred = 999
        assert 'e' in LAST_ARGS['kwargs']

    def test_each_property_has_own_command(self):
        t = self.Example()
        get = t.fred
        assert 'fred' in LAST_ARGS['kwargs']
        get = t.barney
        assert 'barney' in LAST_ARGS['kwargs']

    def test_access_via_getattr(self):
        t = self.Example()
        get = getattr(t, 'fred')
        assert 'fred' in LAST_ARGS['kwargs']

    def test_access_via_dict_fails(self):
        t = self.Example()
        assert 'fred' not in t.__dict__

The mock bit is there to validate that the mGui controls do what they are supposed to, that is translate python like:

button1.backgroundColor = (1,1,0)

into vanilla maya like

cmds.button('button1', e=True, backgroundColor = (1,1,0))

The mock version of cmds.control doesn’t do anything except record the last commands that were issued, but that’s fine - it’s all I want to test. This doesn’t prove the whole system is great but it does let me twiddle around with metaclasses and whatnot and be confident that the output of the whole thing is stable despite changes under the hood.

The only real drawbacks to tests are (a) they’re boring to write, (b) it’s tough to justify them to producers and users who want more features, and (c) they punish you for writing spaghetti code. That last one is really a feature, not a bug, but it is hard to get enthusiastic about setting off to discover what a lousy coder you are :slight_smile:


I should add, in reference to the original question: I’m not sure that running your tests in a completely non-Maya environment will repay the effort. Lightweight mocks are a great tool for simplifying tests and getting you off the hook for complex scene setup in your test environment. Fancy mocks which need to replicate the actual behavior of the systems they stand in for are a significant investment. I think the venn diagram intersection of ‘people who will run these maya specific tests’ and ‘people who don’t have maya’ isn’t big enough to repay the work you’d be doing.


We also use Maya standalone for unit tests. They simply make sure your code is importable, error-free, and mostly does what you thought it would. We still do a full smoke test and test all the UIs as well.

+1 to them being boring to write