Deploy tools for your Maya team

I found this interesting info from 2012:

And some other suggestions.

How do you deploy tools to your team so that it's as automated as possible? Deploying, updating, and all.

I have an SVN server working (I know, it's quite old). Every day we update all the tools, every time people's PCs are turned on.
But now we are starting to work on Maya, and I need some way to deploy the tools.

We need some way for the tools to get updated unnoticed every day, AND for people to be able to update the SVN folders and everything else by hand whenever we ask them to, in the easiest way possible.

We can set up a folder and add it to the Python environment variables in the Maya startup script.
We also want to add shelves with tools.
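A minimal userSetup.py along those lines might look something like this; the share path, shelf name, and the exporter tool are all placeholders, not anything from this thread:

```python
# userSetup.py -- minimal sketch; the share path and shelf contents are placeholders
import sys
import maya.cmds as cmds
import maya.mel as mel
import maya.utils

TOOLS_PATH = "//server/maya_tools/python"   # assumed network location

# make the shared tools importable
if TOOLS_PATH not in sys.path:
    sys.path.append(TOOLS_PATH)

def build_shelf():
    """Create (or rebuild) the studio shelf with a sample button."""
    parent = mel.eval("$gShelfTopLevel = $gShelfTopLevel")   # main shelf tab layout
    if cmds.shelfLayout("StudioTools", exists=True):
        cmds.deleteUI("StudioTools", layout=True)
    cmds.shelfLayout("StudioTools", parent=parent)
    cmds.shelfButton(parent="StudioTools",
                     label="Exporter",
                     annotation="Launch the exporter",
                     image="commandButton.png",
                     command="import exporter; exporter.run()",  # hypothetical tool
                     sourceType="python")

# the Maya UI is not ready while userSetup.py runs, so defer the shelf build
maya.utils.executeDeferred(build_shelf)
```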

But how do you do it? I need other suggestions.

Updating every day is dangerous. What if you check in some bad code? Updates should be regular and frequent, but give yourself some time to test your code before rolling it out to the whole team.

Our Maya tools live in source control (Perforce), and we have an installer that deploys all the tools. We generally release updates twice a week. Before an update is released to the team, the build is tested by a small group of individuals from each discipline. Tests are not exhaustive; they only verify that the pipeline works end to end. Small issues can and do slip by this process.

I tend to use Maya modules to point to network locations for scripts, and only really use git for development. I just have a script that runs daily that takes whatever is in the master branch of my git repos and pushes it to the network location that Maya loads its scripts from. It only pushes out my master branch, not my feature and development branches, since it is really bad to push code that is too new into a production environment. People who are willing to test development versions can clone them from git themselves.
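Not the actual script, but a rough sketch of that kind of daily master-to-network push; the repo URL and share path are invented:

```python
# daily_publish.py -- sketch: publish the git master branch to the network share Maya reads from
# The repo URL and share path are assumptions.
import os
import shutil
import subprocess
import tempfile

REPO = "https://github.com/yourstudio/maya-tools.git"
SHARE = r"\\server\shared\maya_tools"

work = tempfile.mkdtemp()
clone = os.path.join(work, "repo")
# shallow-clone only master; feature/development branches never get published
subprocess.check_call(["git", "clone", "--depth", "1",
                       "--branch", "master", REPO, clone])

# replace the share contents with a fresh copy, leaving the .git folder behind
if os.path.isdir(SHARE):
    shutil.rmtree(SHARE)
shutil.copytree(clone, SHARE, ignore=shutil.ignore_patterns(".git"))
shutil.rmtree(work, ignore_errors=True)
```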

Here is info on modules in Maya:
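And for reference, a bare-bones .mod file pointing Maya at a network location can be as small as one line (the module name and path are placeholders); drop it in a folder on MAYA_MODULE_PATH and Maya picks up the scripts/, plug-ins/, and icons/ folders under that path automatically:

```
+ StudioTools 1.0 //server/maya_tools
```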

Can you guys say how many artists you write scripts for?

I write scripts for a team of ~10 Maya users.

Code in git.

As for compiled files:

Our past system:
All .pyc files live on the server; each user is given a userSetup.py that automatically rebuilds the Maya shelf at every startup from an XML file on the server.
All shelf buttons link to the server location.
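A stripped-down sketch of that kind of XML-driven shelf rebuild from the old system; the XML layout and server path here are invented, not their actual format:

```python
# sketch: rebuild a shelf at startup from an XML description on the server
# Assumed XML, e.g.:
#   <shelf name="StudioTools">
#       <button label="Exporter" icon="commandButton.png"
#               command="import exporter; exporter.run()"/>
#   </shelf>
import xml.etree.ElementTree as ET
import maya.cmds as cmds
import maya.mel as mel

SHELF_XML = "//server/maya_tools/shelf.xml"   # assumed location

def rebuild_shelf(xml_path=SHELF_XML):
    root = ET.parse(xml_path).getroot()
    name = root.get("name")
    parent = mel.eval("$gShelfTopLevel = $gShelfTopLevel")
    if cmds.shelfLayout(name, exists=True):
        cmds.deleteUI(name, layout=True)      # throw away yesterday's shelf
    cmds.shelfLayout(name, parent=parent)
    for btn in root.findall("button"):
        cmds.shelfButton(parent=name,
                         label=btn.get("label"),
                         image=btn.get("icon"),
                         command=btn.get("command"),
                         sourceType="python")
```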

New system:
We still build the shelf using userSetup.py.
Currently this is only in use for the new main pipeline tool. All versions of the tool are stored on the server. Every time the script runs, it checks whether there is a new version, copies that version to the local drive, and runs everything from there. The script automatically installs pip for mayapy.exe along with the necessary libs.
I push to the server using a simple push.py script on my local drive: python push.py minor “comment This was updated”
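A rough sketch of that check-and-copy step, with invented paths and a simple “highest version folder wins” convention:

```python
# sketch: always run the newest published version from a local copy
# Server layout //server/pipeline_tool/<version>/ and the local cache path are assumptions.
import os
import shutil

SERVER_ROOT = "//server/pipeline_tool"
LOCAL_ROOT = os.path.join(os.path.expanduser("~"), "pipeline_tool")

def newest_version(root):
    """Version folders named like 1.0.3; sort numerically, highest wins."""
    versions = [d for d in os.listdir(root)
                if os.path.isdir(os.path.join(root, d))]
    return max(versions, key=lambda v: [int(x) for x in v.split(".")])

def sync_and_get_local():
    latest = newest_version(SERVER_ROOT)
    local = os.path.join(LOCAL_ROOT, latest)
    if not os.path.isdir(local):                  # a new version appeared on the server
        shutil.copytree(os.path.join(SERVER_ROOT, latest), local)
    return local                                  # import/run everything from here
```

With a layout like this, deleting the newest folder on the server makes every machine fall back to the previous version on its next check, which is the rollback described below.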

We update scripts as often as we need; we don't really have a schedule. If something breaks, it's usually fixed in 5 minutes once someone finds it. If there is a major fuckup, we can always revert: just delete the last version from the server and all tools automatically revert to the previous version.

We require everyone on the team to launch their tools using an in-house tool launching app. This app starts off by launching a separate configuration app that ensures your computer is configured correctly and has all the required software installed (including Maya). If the required tools are not installed, it installs them. Both of these tools are written in C#. We just fixed the main tool launching app to run on Mac under Mono.

Any tool launched has its environment controlled. In the case of Maya we set up some environment variables and some paths, and tell it to run a specific startup script at launch. This script and the entire scripting environment come from one of four places. Three of these copy tools to a cache location on your local hard drive: they copy from our “release” network share (the default), our “beta” network share, or out of your local version control (Perforce in our case) directories. We sync Perforce automatically before copying the tools out of it. The fourth option is to run tools live from version control (Perforce). This is what our TAs do when they're working on scripts; no one else is supposed to use this mode. TAs can publish tools to the beta and release network shares to have artists test them, and/or once they trust the code they've written.

The reason we copy tools for the end users is so that Maya's file locks won't interfere with deploying new versions of the tools, or with pulling files to your machine (syncing) in Perforce.

This system allows us a great deal of flexibility. For instance, an external outsourcing vendor can pull their tools directly out of Perforce (they don't have access to our network shares), or they can set up their own network share and we can push tools to that. Since almost everything automatically configures itself, we have reduced new-user setup support to almost zero.
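Their launcher is C#, but the core “controlled environment” idea fits in a few lines of Python; a sketch in which every path and variable value is made up:

```python
# sketch: launch Maya with a controlled environment (all paths/values are assumptions)
import os
import subprocess

env = os.environ.copy()
env["MAYA_MODULE_PATH"] = r"C:\tools_cache\modules"      # local cache copied from the release share
env["PYTHONPATH"] = r"C:\tools_cache\python" + os.pathsep + env.get("PYTHONPATH", "")
env["STUDIO_TOOLS_MODE"] = "release"                     # hypothetical: release / beta / p4-copy / p4-live

# tell Maya to run a specific startup script once it has launched
subprocess.Popen(
    [r"C:\Program Files\Autodesk\Maya2024\bin\maya.exe",
     "-command", 'python("import studio_startup; studio_startup.main()")'],
    env=env)
```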

I said “everyone on the team” uses this tool launcher, and this holds true. For instance our engineers enjoy the same system, and know that when they launch Visual Studio they will have the correct version with the correct options and add-on tools installed.

EDIT: I forgot to mention: our Maya environment is completely vanilla. We don’t touch any of the default startup scripts. This avoids conflicts with 3rd party tools and scripts that muck around with these. Our artists aren’t allowed to run externally created scripts without previous vetting, but we can’t always police this. Our system just makes sure that anything they’ve done in this area doesn’t pollute or interfere with our production environment.

@btribble, just out of curiosity, do you have to deal with multiple platforms? Mac/Linux/Windows, how does that go? If not, what happens when an outsourcing party prefers a different platform?

We’re Windows all the way, even on our build farm(s). We only have limited support for other platforms. Thank God for small miracles.

EDIT: We did write most of the systems to be platform neutral, but as time goes by, Windows specific processes such as DOS commands and path handling have crept in. We could broaden our support for other platforms, but we’d just be trading a theoretical savings on the OS side of things for very tangible costs in terms of man hours to not only add the support but to maintain it. Bootcamp (etc.) is our answer to the diehard Mac fans.

EDIT2: We don’t work with external vendors who can’t support our needs. This hasn’t ever been a significant issue for us, but we usually end up in long multi-year relationships. We’re in game development and therefore don’t have to “strike the set”. Again, small miracles…

There’s a more detailed description of what I do here.

Basically I have a server which updates on every GitHub check-in, runs unit tests, and then compiles all the .pys in my toolset to .pyc in a zip file. userSetup.py grabs the zip file if the local version is out of date – we currently keep it on a network share, but I'm moving that to a web server for more control and security in the future. I'm very fond of this system because it makes sure that the state on users' machines is monolithic: there is no chance of leftover files or old .pycs messing things up.
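Not his exact code, but the gist of that userSetup in a few lines; the share path, version file, and zip name are placeholders. Python's zipimport lets you put the zip itself on sys.path, which is what keeps the deployment monolithic:

```python
# userSetup.py -- sketch: fetch the latest tools zip and import straight out of it
# Share path, version file and zip name are assumptions.
import os
import shutil
import sys

SHARE = "//server/toolbuilds"
LOCAL = os.path.join(os.path.expanduser("~"), "maya_tools")

with open(os.path.join(SHARE, "latest_build.txt")) as f:
    build = f.read().strip()                 # e.g. "toolset_142.zip"

local_zip = os.path.join(LOCAL, build)
if not os.path.exists(local_zip):            # local copy is out of date
    if os.path.isdir(LOCAL):
        shutil.rmtree(LOCAL)                 # no leftover files or stale .pycs
    os.makedirs(LOCAL)
    shutil.copy2(os.path.join(SHARE, build), local_zip)

# zipimport: the zip goes on sys.path and the whole toolset imports from it
if local_zip not in sys.path:
    sys.path.insert(0, local_zip)
```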

I’ve been super happy with aggressive unit testing to make sure that bad things don’t go out into the wild, and git is great because the guys writing tools can still have local version control without triggering builds just because they want to do some version history on a file they are working on. The server only worries about the remote repo so I can update dozens of files locally, run them for myself, but only trigger a build when I push all of the commits up to github.

The single zip file is also outsource friendly: just drop our userSetup and zip file into the scripts folder and you have the entire tool set, and when you're done just delete them and you have vanilla Maya again.

Previous TAO discussions here:

Yeah Theodox, that reminds me, we need to add unit testing to our tools publishing process. We've put that off for far too long. Thankfully, we've been pretty good about not breaking things…

Do you use an off-the-shelf testing framework à la PyUnit? If so, what has your experience been?

I use nose, and the build server – which is just a Python script – uses it to run the tests and verify that they all pass. I try to test library code pretty heavily, but I don't test UI stuff – that's just a cost-benefit calculation based on the hassle of testing GUIs, but since I'm a stickler about keeping functional and GUI code separate I still get decent coverage.
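For anyone curious, wiring nose into a plain Python build script only takes a few lines; a minimal sketch (the tests directory name is an assumption):

```python
# sketch: fail the build if any test fails
import sys
import nose

# run everything under ./tests; nose.run() returns True only if all tests passed
ok = nose.run(argv=["nosetests", "tests", "--verbosity=2"])
if not ok:
    sys.exit(1)   # stop before packaging/deploying the toolset
```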

The really hard part is retrofitting tests to older code: I've found it's pretty useful as an onboarding thing for new TAs, to familiarize them with the code base and get some long-term value. Of course there's never enough time for all the tests that should get written… Even really dumb ones catch a lot of bad things before they happen.

I had an idea, probably not an original one, to use an internal PyPI server. I split up code into modules that have setup.py files, as if they were their own projects with their own documentation and tests. Then you could have a server watch for changes on a main branch, like theodox uses, that would run the build and tests for each module. Then if everything looked good, it would upload a new version to the PyPI server. A “build” would simply be a requirements.txt file with pegged versions of things. pip could just clear out the old stuff and install the new.

This allows you to send specific users a req.txt file to test new builds of things without bugging the rest of the team. And if shit went south, you would have all previous known-working versions up and ready; just use a different req.txt file.

You could also use this for outsourcing. Either give them access to your PyPI server or just send them a zip with your packages ready for pip, using wheel or egg.

You could have userSetup or a launcher run pip before loading Maya, and it would require almost no interaction from the users.
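Sketching that out, a “build” is just a pinned requirements file plus one pip call through mayapy; the index URL, file, and package names below are all hypothetical:

```python
# sketch: install a pinned "build" of tool packages into Maya's Python
# Index URL, requirements file, and package names are all hypothetical.
import subprocess

MAYAPY = r"C:\Program Files\Autodesk\Maya2024\bin\mayapy.exe"
REQUIREMENTS = "//server/builds/build_03.txt"   # e.g. studio-core==1.4.2, studio-exporter==2.0.1, ...

subprocess.check_call([
    MAYAPY, "-m", "pip", "install",
    "--index-url", "http://pypi.internal.example/simple",   # the internal PyPI server
    "--upgrade", "--force-reinstall",
    "-r", REQUIREMENTS,
])
```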

Anyhow, I thought it was a cool idea :slight_smile:

It is an interesting take: I think robG did this at CCP as well. I looked into it but came away feeling like it was better for developers, but worse for artists.

The main issue I saw was dependency management: if you rev package X to support something in Y, you may also get changes in Z, which also depends on the older X. Unfortunately you won't know about them until you find them at runtime, and when you do it may not be clear where they really originated: “this was working last week but now it's broken – and I didn't change it!” Since the packages may be revving independently of each other, you have to check the state of every module independently if you're trying to guarantee that conditions are the same between two different installs.

This is sort of the Achilles heel of all Python version management; it's why so many people end up going with virtualenvs in regular old Python development. It's not an easy thing to solve without tons of redundant modules or building version knowledge into the imports directly.

The way we have our stuff set up is fairly simple, but it works fairly well for us. We just have a bat file for installation that adds our scripts directory to the Maya script path environment variable; we then have a userSetup in there that installs other modules that build a menu, add new script/plugin paths, and set up custom shelf loaders. We give more techy artists the ability to use their own userSetup-style file by asking them to name it customSetup.py (or .mel) and having it called during startup by our userSetup.
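The customSetup.py hook can be as simple as a guarded import at the end of the studio userSetup; a sketch (only the customSetup name comes from the post, the rest is assumed):

```python
# at the end of the studio userSetup.py: optional per-user hook
import traceback

try:
    import customSetup          # a user's own customSetup.py, if it exists on the script path
except ImportError:
    pass                        # most users won't have one
except Exception:
    # a broken personal script shouldn't break the studio startup
    traceback.print_exc()
```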

We use Perforce for our tools and interface with it through p4python (I know there are some SVN Python libraries out there too). If we want to push an update, we just have one trigger file that we check out and submit. On startup, Maya checks that file; if it is out of date and other files in the Maya Tools directory are also out of date, it pops up a dialog telling the user there's an update, and it syncs the tools directory if they press yes. If the trigger file is the only thing out of date (because that user might have just manually updated before the update got pushed), we silently update it at that point to avoid a false positive later. That way we can make sure things are properly tested and all dependencies are checked in before we auto-update. There are also menu items and shelf buttons where people can manually update the tool directory.
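A rough sketch of that startup check with p4python; the depot path is an assumption, and the real thing also handles the “only the trigger file is stale” case described above:

```python
# sketch: offer to sync the Maya Tools directory if Perforce says it's out of date
# The depot path is an assumption.
from P4 import P4, P4Exception
import maya.cmds as cmds

TOOLS = "//depot/MayaTools/..."

p4 = P4()
p4.connect()
try:
    try:
        # 'sync -n' previews what would sync without changing anything;
        # with default settings p4python raises a warning when already up to date
        stale = p4.run("sync", "-n", TOOLS)
    except P4Exception:
        stale = []

    if stale:
        answer = cmds.confirmDialog(title="Tools update",
                                    message="A tools update is available. Sync now?",
                                    button=["Yes", "No"], defaultButton="Yes")
        if answer == "Yes":
            p4.run("sync", TOOLS)
finally:
    p4.disconnect()
```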