Simple threaded jobs for Maya (Python 3 only)

A common issue for folks trying to do expensive work at Maya startup is how to do background work properly. Maya is very finicky about work done outside the main thread – even seemingly innocent calls like print() can be problematic when they issue from another thread, and touching the UI or the scene from another thread is almost guaranteed to cause problems.

Current Python provides some nice ways to wrap this up more neatly than the traditional method of creating a thread and polling an output queue. This is an example of a simple way to run jobs outside the main Maya thread and get a thread-safe callback at the end without much extra work:

import concurrent.futures
import maya.utils
import maya.cmds as cmds

class DeferredJob(object):
    def __init__(self, max_workers=4):
        self.exect = concurrent.futures.ThreadPoolExecutor(max_workers)
        self.futures = []

    def add(self, job, callback=None):
        job_future = self.exect.submit(job)
        self.futures.append(job_future)
        if callback:
            # fires on the worker thread when the job finishes;
            # executeDeferred then hands the completed future to the
            # callback safely on Maya's main thread
            def maya_cb(completed_future):
                maya.utils.executeDeferred(callback, completed_future)
            job_future.add_done_callback(maya_cb)
        return job_future

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # stop accepting new jobs, but don't block the main thread
        # waiting for the ones already submitted
        self.exect.shutdown(wait=False)

    def results(self):
        return tuple(self.futures)

This creates a context manager which will run all submitted jobs on parallel threads. If you need a safe notification about a completed job, pass a callback to the “add” call. That callback will fire on the main thread when the job completes, with the future object representing the job as its argument. You can call the future’s result() or check to see what errors it might have generated.

Simple example:

with DeferredJob() as test:

    # you would not want to do this in regular threads – it would lock up your Maya!
    def example():
        return cmds.polyCube()

    def example2():
        return cmds.polySphere()

    # print the job's result once the future lands back on the main thread
    def show(completed_future):
        print(completed_future.result())

    test.add(example, show)
    test.add(example2, show)

# after a second:
# ['pCube2', 'polyCube2']
# another second later
# ['pSphere2', 'polySphere2']
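Since the callback receives the future itself, it can also check for failures instead of just grabbing the result. Here is a minimal plain-Python sketch of that (no Maya required – the function names and the error message are made up for illustration):

```python
import concurrent.futures

def succeeds():
    return "ok"

def fails():
    raise ValueError("boom")

with concurrent.futures.ThreadPoolExecutor(2) as pool:
    good = pool.submit(succeeds)
    bad = pool.submit(fails)

# the with-block waits for both jobs; each future now holds
# either a result or the exception its job raised
assert good.exception() is None
assert good.result() == "ok"
assert isinstance(bad.exception(), ValueError)
```

Calling result() on a failed future would re-raise the job's exception, so checking exception() first is the polite way to probe it.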

I should add, for completeness’ sake, that if the work in those threads touches the Maya scene while running, the old Maya threading limits still apply: this is great for connecting to a database or downloading a file over HTTP, but not for trying to do vertex edits “simultaneously” on a bunch of Maya scene objects.

If it’s a math-heavy operation you can do things like pass each job the scene data up front (like, say, a vertex list), do the work in the thread without Maya API calls, then pass the results back to be applied in the scene. Generally Maya scene work does not want to live anywhere but the main thread, but for some things – like, say, generating convex hulls – you might be able to get a boost from that…
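A sketch of that pattern in plain Python – the centroid function here is just a stand-in for heavier math like a convex hull, and deliberately makes no Maya API calls:

```python
import concurrent.futures

def centroid(points):
    # pure-Python math: no Maya API calls, so this is safe
    # to run on a worker thread
    n = float(len(points))
    return tuple(sum(axis) / n for axis in zip(*points))

# scene data captured up front on the main thread
verts = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 2.0)]

with concurrent.futures.ThreadPoolExecutor() as pool:
    result = pool.submit(centroid, verts).result()

# back on the main thread you'd apply `result` to the scene
print(result)  # (0.5, 0.5, 0.5)
```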


Qt signals are also a really good way of handling this sort of stuff seamlessly. When a slot is invoked across threads, it runs in the thread that owns the receiving object.

But yeah, ThreadPoolExecutor is very handy – and the wait and as_completed functions are really handy too.
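For example, as_completed yields futures in the order they finish rather than the order they were submitted, which is handy when jobs take wildly different times (the sleep durations here are arbitrary stand-ins for real work):

```python
import concurrent.futures
import time

def job(delay):
    time.sleep(delay)
    return delay

with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
    # submitted in one order...
    futures = [pool.submit(job, d) for d in (0.3, 0.1, 0.2)]
    # ...but as_completed hands them back as each one finishes
    finished = [f.result() for f in concurrent.futures.as_completed(futures)]

print(finished)  # [0.1, 0.2, 0.3]
```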

Doesn’t the global interpreter lock prevent Python threads from actually running concurrently? I assume these libraries get around that somehow…

Correct, there’s no “CPU concurrency” here – so kicking off multiple threads will not improve CPU throughput. However for IO-bound tasks (like downloading something from the internet) this is effectively non-blocking, which is good enough. Very few GUI apps really do a good job of full concurrency even in other languages; it’s a much more complex programming model once you move past things like “do a lot of math on a lot of cores”.
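A quick way to see that in plain Python – time.sleep releases the GIL the same way a network wait would, so the fake “downloads” overlap even though no CPU parallelism is happening (timings are arbitrary):

```python
import concurrent.futures
import time

def fake_download(i):
    time.sleep(0.2)  # stands in for waiting on the network; releases the GIL
    return i

start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fake_download, range(4)))
elapsed = time.perf_counter() - start

# four 0.2s waits overlap, so the total is about 0.2s rather than 0.8s
print(results, elapsed)
```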


I see, thanks for the clarification! I always wondered what the advantage of multi-threading in Python is. The only case where I’ve used it is when I don’t want my Qt window to freeze while performing a task in Maya. I have heard about how much of a pain multithreading is, with race conditions and deadlocks.

Just to clarify: concurrency works great in Python, it’s parallelism that’s a bit more complicated to achieve.