Maya mock for tests

Has anyone used, or created, a mock of Maya’s maya.cmds module? E.g.

try:
    from maya import cmds
except ImportError:
    from mock.maya import cmds

# Do test here

I’m looking to have tests run without the need for Maya, such as on travis-ci or any other form of distributed testing. I realise not everything can reliably be mocked; my requirements are:

1. Create node
2. Create attribute
3. List node
4. List attributes
5. Connect/Disconnect attributes
6. Nest node
7. File open/save
8. Modify workspace

And though I’m sure more would surface in time, the goal is to make only the most straightforward tests available outside of Maya.
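
For illustration, the stand-in I have in mind could be as dumb as the sketch below; the module path and every function here are hypothetical, not an existing package, and it does nothing but keep an in-memory registry of nodes, attributes and connections:

# mock/maya/cmds.py - hypothetical sketch, not an existing package.
# An in-memory registry of nodes, attributes and connections; just
# enough for straightforward tests to run without Maya.

_nodes = {}        # node name -> {'type': ..., 'parent': ...}
_attrs = {}        # 'node.attr' plug -> value
_connections = {}  # destination plug -> source plug

def createNode(node_type, name=None, parent=None, **kwargs):
    name = name or '%s1' % node_type
    _nodes[name] = {'type': node_type, 'parent': parent}
    return name

def addAttr(node, longName=None, **kwargs):
    _attrs['%s.%s' % (node, longName)] = None

def setAttr(plug, value, **kwargs):
    _attrs[plug] = value

def getAttr(plug, **kwargs):
    return _attrs.get(plug)

def ls(type=None, **kwargs):
    # 'type' shadows the builtin on purpose, to match the cmds flag name.
    if type is None:
        return list(_nodes)
    return [n for n, data in _nodes.items() if data['type'] == type]

def listAttr(node, **kwargs):
    prefix = node + '.'
    return [plug.split('.', 1)[1] for plug in _attrs if plug.startswith(prefix)]

def connectAttr(source, destination, **kwargs):
    _connections[destination] = source

def disconnectAttr(source, destination, **kwargs):
    _connections.pop(destination, None)

def parent(child, new_parent, **kwargs):
    _nodes[child]['parent'] = new_parent
    return [child]

File open/save and workspace modification would just be a few more functions recording state in the same way.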

Best,
Marcus

Why not just do your tests with the Maya standalone interpreter?

Hi passerby, thanks for your reply.

Without context, I would assume it’s safe to rephrase your question as “why use mocks?”, which is quite a broad topic.

But in short, running them via the standalone interpreter requires the standalone interpreter and I’m looking to distribute the running of tests to machines not necessarily equipped with Maya.

If you’re interested in some of the benefits of mocking, have a look at how mocking works with databases; the issues and use-cases are identical to working with Maya.

Best,
Marcus


How would this be a valid test? Maya does some fancy things with return values that you cannot (reasonably) account for. For example:



node = cmds.createNode('locator', name='foo')
assertEqual(node, 'foo')
node2 = cmds.createNode('locator', name='foo')
assertEqual(node2, 'foo')

The second assertion would fail in Maya, but not in a mocked environment.
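
To make that concrete: reproducing even this one behaviour means the mock has to track every name it has handed out. A rough sketch of that logic - and it is still only an approximation of Maya’s real renaming rules, which also involve namespaces, DAG paths and shape/transform pairs:

_existing_names = set()

def createNode(node_type, name=None, **kwargs):
    # Hypothetical mock: uniquify a clashing name by appending a number,
    # roughly the way Maya does.
    name = name or '%s1' % node_type
    base, counter = name, 1
    while name in _existing_names:
        name = '%s%d' % (base, counter)
        counter += 1
    _existing_names.add(name)
    return name

createNode('locator', name='foo')  # 'foo'
createNode('locator', name='foo')  # 'foo1', roughly what Maya would return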

Thanks TheMaxx for your input.

In a nutshell, the features covered by these tests would rely on ideal circumstances within Maya. That is, I’m not looking to test whether Maya works or how it works, nor the integration between tools and Maya, but rather whether the behaviour of the tool is as expected; some of those behaviours inherently depend on features provided by the integration, and thus on the maya.cmds module.

Having said that, there will of course be tests utilising mayapy, and tests running within the GUI as well, and I think it’s safe to say that I’m not looking for a full replacement of the maya.cmds module; just enough to get tests focused on the tool, as opposed to its integration, to run safely within a continuous integration environment.

How are you guys running tests on tools that depend on Maya?

Best,
Marcus

We don’t run any automated tests on our Maya Tools. With each release, we go through a smoke test that encompasses the entire pipeline.
The TAs have a special development environment where we can sync each other’s changes ahead of release to do sanity checks.

Individual tools can log success/failure to a database as users use them; otherwise, users will report a bug if a problem slips through.

We’ve discussed unit tests and other strategies, but for our situation it’s not worth the extra investment.

The goal of this kind of testing is to prove that the code is doing what it’s been told to do despite upstream changes. That’s not the same thing as guaranteeing that it works! A unit test proves that a given chunk of code, ideally a nice small chunk, is doing expected stuff. Unit tests can’t really tell you if the expected behavior is ‘good’ - that’s a user test or an acceptance test thing. The value of unit tests is huge - in Python, in particular, it’s super easy to make changes with unforeseen consequences, and unit tests are the canary in the coal mine that tells you when a change over here has an unexpected ripple over there. As long as that’s all you expect from them, they are incredibly valuable.
Just yesterday, for example, my unit tests found an awesome and very subtle bug that was introduced by accident when somebody ran a code formatter on existing code. No changes to the logic at all - but by removing a trailing comma, the formatter turned this expression:


VERSION = ( {'version': (0,1), 'date':datetime.now, 'user':None}, ) 

into this:


VERSION = ( {'version': (0,1), 'date':datetime.now, 'user':None} ) 

which changed VERSION from a one-element tuple into a dictionary. It’s a lot to expect a coder to spot that by eye! The code compiled, it even ran - and for 99% of things it does, it worked. But the unit test correctly pointed out that the expected variable type had changed. And it pointed that out when the change was made - not two weeks later, when a user actually hit bad data and had a crash inside a tool - so we saw the problem and fixed it right away instead of combing through dozens of files to figure out why the XXX dialog was crashing or whatever.
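
A check as blunt as the following is enough to catch that kind of regression (illustrative only, not the actual test; VERSION here is a stand-in for the constant above):

from datetime import datetime
from unittest import TestCase

VERSION = ({'version': (0, 1), 'date': datetime.now, 'user': None},)

class TestVersionConstant(TestCase):
    def test_version_is_a_one_element_tuple(self):
        # Fails the moment the trailing comma is lost and VERSION
        # silently becomes a plain dictionary.
        self.assertIsInstance(VERSION, tuple)
        self.assertEqual(len(VERSION), 1)
        self.assertIsInstance(VERSION[0], dict)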

So, unit tests are awesome. Just don’t expect them to eliminate all bugs. They just give you a higher class of bugs - bugs caused by dumb algorithms and bad architecture instead of typos or sloppy programming.

I run my tests in Maya standalone - for anything except GUI stuff (which even Real Programmers™ find hard to unit-test anyway) it’s equivalent, and way faster to iterate on. I’ll sometimes use mocks to handle commands that are too slow or situational for good tests. Here’s an example of some tests from mGui:


'''
Created on Mar 3, 2014

@author: Stephen Theodore
'''

from unittest import TestCase

LAST_ARGS = {}


def control_mock(*args, **kwargs):
    # Records the last call made through cmds.control so the tests
    # below can inspect the arguments and flags used.
    LAST_ARGS['args'] = args
    LAST_ARGS['kwargs'] = kwargs


import maya.standalone
maya.standalone.initialize()

import maya.cmds as cmds
cmds.control = control_mock



CONTROL_CMDS = ['attrColorSliderGrp',
        'attrControlGrp',
        'attrFieldGrp',
        'attrFieldSliderGrp',
        'attrNavigationControlGrp',
        'button',
        'canvas',
        'channelBox',
        'checkBox',
        'checkBoxGrp',
        'cmdScrollFieldExecuter',
        'cmdScrollFieldReporter',
        'cmdShell',
        'colorIndexSliderGrp',
        'colorSliderButtonGrp',
        'colorSliderGrp',
        'commandLine',
        'componentBox',
        'control',
        'floatField',
        'floatFieldGrp',
        'floatScrollBar',
        'floatSlider',
        'floatSlider2',
        'floatSliderButtonGrp',
        'floatSliderGrp',
        'gradientControl',
        'gradientControlNoAttr',
        'helpLine',
        'hudButton',
        'hudSlider',
        'hudSliderButton',
        'iconTextButton',
        'iconTextCheckBox',
        'iconTextRadioButton',
        'iconTextRadioCollection',
        'iconTextScrollList',
        'iconTextStaticLabel',
        'image',
        'intField',
        'intFieldGrp',
        'intScrollBar',
        'intSlider',
        'intSliderGrp',
        'layerButton',
        'messageLine',
        'nameField',
        'nodeTreeLister',
        'palettePort',
        'picture',
        'progressBar',
        'radioButton',
        'radioButtonGrp',
        'radioCollection',
        'rangeControl',
        'scriptTable',
        'scrollField',
        'separator',
        'shelfButton',
        'soundControl',
        'swatchDisplayPort',
        'switchTable',
        'symbolButton',
        'symbolCheckBox',
        'text',
        'textField',
        'textFieldButtonGrp',
        'textFieldGrp',
        'textScrollList',
        'timeControl',
        'timePort',
        'toolButton',
        'toolCollection',
        'treeLister',
        'treeView']    


import mGui.core as core
import maya.mel as mel
import inspect
import mGui.properties as properties
import mGui.gui as gui

class test_CtlProperty(TestCase):
    '''
    very dumb test that just makes sure the CtlProperty is calling the correct command, arg and kwarg
    '''

    class Example(object):
        CMD = cmds.control
        
        def __init__(self, *args, **kwargs):
            self.Widget = 'path|to|widget'
        
        fred = properties.CtlProperty("fred", CMD)
        barney = properties.CtlProperty("barney", CMD)
        
    def setUp(self):
        LAST_ARGS['args'] = (None,)
        LAST_ARGS['kwargs']= {}
        
    def test_call_uses_widget(self):
        t = self.Example()
        get = t.fred
        assert LAST_ARGS['args'][0] == 'path|to|widget'
        
    def test_call_uses_q_flag(self):
        t = self.Example()
        get = t.fred
        assert 'q' in LAST_ARGS['kwargs'] 
        
    def test_call_uses_q_control_flag(self):
        t = self.Example()
        get = t.fred
        assert 'fred' in LAST_ARGS['kwargs'] 
        
    def test_set_uses_widget(self):
        t = self.Example()
        t.fred = 999
        assert LAST_ARGS['args'][0] ==  'path|to|widget'

    def test_set_uses_e_flag(self):
        t = self.Example()
        t.fred = 999
        assert 'e' in LAST_ARGS['kwargs']
        
    def test_each_property_has_own_command(self):
        t = self.Example()
        get = t.fred
        assert 'fred' in LAST_ARGS['kwargs']
        get = t.barney
        assert 'barney' in LAST_ARGS['kwargs']
    
    def test_access_via_getattr(self):
        t = self.Example()
        get = getattr(t, 'fred')
        assert 'fred' in LAST_ARGS['kwargs']

    def test_access_via_dict_fails(self):
        t = self.Example()
        assert not 'fred' in t.__dict__
        

The mock bit is there to validate that the mGui controls do what they are supposed to, that is, translate Python like:


button1.backgroundColor = (1,1,0)

into vanilla Maya like:


cmds.button('button1', e=True, backgroundColor = (1,1,0))

The mock version of cmds.control doesn’t do anything except record the last commands that were issued, but that’s fine - it’s all I want to test. This doesn’t prove the whole system is great but it does let me twiddle around with metaclasses and whatnot and be confident that the output of the whole thing is stable despite changes under the hood.
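
Concretely, after one of the setter tests above runs (t.fred = 999), LAST_ARGS ends up holding something along these lines - the exact values are an assumption on my part, the tests only assert on the keys:

t = test_CtlProperty.Example()
t.fred = 999
# roughly: LAST_ARGS == {'args': ('path|to|widget',),
#                        'kwargs': {'e': True, 'fred': 999}}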

The only real drawbacks to tests are (a) they’re boring to write, (b) it’s tough to justify them to producers and users who want more features, and (c) they punish you for writing spaghetti code. That last one is really a feature, not a bug, but it is hard to get enthusiastic about setting off to discover what a lousy coder you are :)


I should add, in reference to the original question: I’m not sure that running your tests in a completely non-Maya environment will repay the effort. Lightweight mocks are a great tool for simplifying tests and getting you off the hook for complex scene setup in your test environment. Fancy mocks which need to replicate the actual behavior of the systems they stand in for are a significant investment. I think the Venn-diagram intersection of ‘people who will run these Maya-specific tests’ and ‘people who don’t have Maya’ isn’t big enough to repay the work you’d be doing.

We also use Maya standalone for unit tests. They simply make sure your code is importable, error-free, and mostly does what you thought it would. We still also do a full smoke test and test all the UIs.

+1 to them being boring to write

Not to necro this thread unnecessarily, but it’s worth pointing out that since this was written, the mock library has become part of the standard library (unittest.mock, as of Python 3.3). A backport for Python 2.7 is available on PyPI as the mock package.

This is a good alternative to the hand-written mocks above, although the caveats about what is and isn’t a good candidate for mocking still remain. The mock library has some nice syntactic sugar for mocking out Maya calls - this example has two tests that fake a call to cmds.file with predictable return values:

class TestSceneIsSaved(TemporaryProjectTest):
    def test_scene_is_saved_fail(self):
        with mock.patch('maya.cmds.file', return_value=''):
            example = checks.SceneIsSaved()
            self.assert_check_fails(example)

    def test_scene_is_saved_pass(self):
        with mock.patch('maya.cmds.file', return_value="C:/ul/project/deleteme3.ma"):
            example = checks.SceneIsSaved()
            self.assert_check_passes(example)
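
Coming back to the original question of running with no Maya at all: mock can also stub out the maya modules at import time, so that code doing from maya import cmds at least imports cleanly on a machine without Maya. A sketch of that, to run in the test bootstrap before anything imports maya:

import sys

try:
    import maya.cmds  # a real Maya / mayapy is available
except ImportError:
    # No Maya on this machine: register MagicMock stand-ins so imports
    # succeed. Any cmds call a test actually relies on still needs an
    # explicit mock.patch with a sensible return_value, as above.
    import mock  # 'from unittest import mock' on Python 3
    sys.modules['maya'] = mock.MagicMock()
    sys.modules['maya.cmds'] = mock.MagicMock()
    sys.modules['maya.mel'] = mock.MagicMock()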

I’m just about to dive into this world again too - and @Theodox is absolutely correct that unit tests are validation for small functions that have quantifiable results. Mocks, though, while I get the idea, I find basically impossible to implement in the context of Maya, as you’d essentially be emulating cmds, maya.api.OpenMaya, etc., potentially in their entirety, which kinda defeats the point.

The problem is we’re testing against an interpretation of an existing interpretation, i.e. mayapy wrapping base Python, so really we should be running unit tests with the assumption that Maya is our default Python interpreter, and not try to mock its functionality other than any additional libs we’re using.

For me, the big reason to write tests that don’t depend on mayapy is that I can run them on a server somewhere as part of the usual build verification.

While I haven’t been doing this kind of work against Maya recently, we treat unit tests for our world editor the same way. And yes, it does sometimes require a lot of tedious mocking.

But if you can get away with using mayapy as your test environment, it does save you a lot of extra work.
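
For anyone setting that up, the runner can be as small as this sketch (the tests/ directory layout is an assumption), invoked as mayapy run_tests.py:

# run_tests.py
import unittest

import maya.standalone
maya.standalone.initialize(name='python')

# Discover and run everything under ./tests (assumed layout).
suite = unittest.TestLoader().discover('tests')
unittest.TextTestRunner(verbosity=2).run(suite)

# uninitialize() only exists in newer Maya versions.
if hasattr(maya.standalone, 'uninitialize'):
    maya.standalone.uninitialize()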


It’s a totally legit strategy to have Maya-specific tests always pass when they’re run outside of Maya, if that keeps you testing consistently. The big thing is just not to do the reverse: non-Maya code has to be isomorphic in and out of Maya, but Maya-specific code has no real meaning outside of Maya, so simply saying “it’s fine, what do I know?” is legit.
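
One common way to get something close to that behaviour is to skip the Maya-specific cases whenever maya.cmds isn’t importable - class and test names below are just illustrative:

import unittest

try:
    from maya import cmds  # only importable inside Maya or mayapy
    HAS_MAYA = True
except ImportError:
    HAS_MAYA = False

@unittest.skipUnless(HAS_MAYA, 'requires maya.cmds')
class MayaSpecificTests(unittest.TestCase):
    def test_create_node(self):
        # Runs only where maya.cmds exists; reported as skipped elsewhere.
        self.assertTrue(cmds.createNode('transform'))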
