My Humble Pipeline [MHP] project

python
maxscript
pipeline

#21

Interesting point Theo… But knowing our “working style”, I admit I find removing options difficult.
Given a blend file I can think of three cases:
1- Only one mesh object to be exported, so I can name it precisely and export it without any user action.
2- Many meshes belonging to a common logical set (e.g. a Kitchen blend file with stuff like Table and Chair inside): the blend name is the set name, and each mesh has to be exported to its own FBX. For the artist’s convenience, objects can be placed anywhere in the scene, but the export will (temporarily) move them to the origin so their pivot in UE is correct.
3- Many meshes that need to be exported as a single FBX and then imported as a combined mesh. Here the blend file looks like case 1, but the number of files to be produced changes. This was the main reason I considered the single/multi option. How could I identify blend files that are NOT sets but can contain many meshes?
Maybe by adding something to the name? Like Kitchen_Set.blend vs ComplexObject.blend
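For example, such a convention could be picked up with something like this minimal sketch (the `_Set` suffix and the helper name are just things I’m toying with, not an agreed standard):

```python
import bpy

def detect_export_case():
    """Return 'single', 'set' or 'combined' based on file name and mesh count."""
    blend_name = bpy.path.basename(bpy.data.filepath)   # e.g. "Kitchen_Set.blend"
    stem = blend_name.rsplit(".", 1)[0]
    meshes = [o for o in bpy.data.objects if o.type == 'MESH']

    if len(meshes) <= 1:
        return 'single'              # case 1: one mesh, one auto-named fbx
    if stem.endswith("_Set"):
        return 'set'                 # case 2: one fbx per mesh
    return 'combined'                # case 3: one fbx containing everything
```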

As always, a lot of stuff to consider :slight_smile:


#22

A better way to think about it is not ‘removing options’ but ‘removing options at export time’. You can move a lot of that info to a one-time setup step that happens the first time a file is exported.

I’ve never been fond of the multiple-files-exported-from-one thing, but I know artists love it so I grudgingly support it – each output file is represented in the scene by a transform with a custom attribute that gives the file name, then I just loop over those nodes to do the exports.
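In Blender terms (since that’s what the rest of the thread is using), the idea might look roughly like the sketch below – the “export_file” property name and the export folder are just placeholders, not a prescribed setup:

```python
import os
import bpy

EXPORT_ROOT = "D:/exports"      # wherever the pipeline wants the fbx files (placeholder)

def export_tagged_groups():
    for obj in bpy.data.objects:
        fbx_name = obj.get("export_file")        # custom property, set once at setup time
        if not fbx_name:
            continue
        bpy.ops.object.select_all(action='DESELECT')
        obj.select_set(True)
        for child in obj.children:               # direct children only, for simplicity
            child.select_set(True)
        bpy.ops.export_scene.fbx(
            filepath=os.path.join(EXPORT_ROOT, fbx_name + ".fbx"),
            use_selection=True,
        )
```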

The big thing you want to avoid is human error at export time – that causes all sorts of problems. And since exporting is something that artists do all day every day, even a fraction of a percent error rate will add up fast.


#23

The best exporters are the ones that can be (re)run without a user at all.


#24

Our principal modeler is a real fan of this stuff. As an old-time 3D generalist I understand his point, after all :slight_smile:
A proxy object in the scene holding some information sounds good: it could be created at first export time, or even by a small wizard when the scene is created…
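Something as simple as this could already do, I guess (the object name and property names are just a first guess on my part):

```python
import bpy

def ensure_export_proxy(set_name, mode='set'):
    """Create (or fetch) an 'MHP_Export' empty carrying the export metadata."""
    proxy = bpy.data.objects.get("MHP_Export")
    if proxy is None:
        proxy = bpy.data.objects.new("MHP_Export", None)      # object data None -> empty
        bpy.context.scene.collection.objects.link(proxy)
    proxy["set_name"] = set_name         # e.g. "Kitchen"
    proxy["export_mode"] = mode          # 'single' / 'set' / 'combined'
    return proxy
```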

I should add this to my tips :slight_smile:


#25

Hi everyone!
Year 2019 is here and it’s time to make some progress!
With help from many of you on the Slack channels I’m starting to get some micro-tools working: the first one is the “project creator”: it creates fresh UE4 projects and sets up their P4 depot and so on.
Nice start…

Since the standalone tools will be (I suppose) relatively small and the end users (aka “mostly artists”) shouldn’t have to play with Python, my idea is to deploy the small tools as .exe files.
I’ll keep .py for DCC/Unreal scripts, which don’t need a separately installed interpreter (since one is bundled with the software).
So at the moment I’m thinking about how to store and deploy stuff: I’ve read a lot of old posts here and already talked briefly on Slack about this topic. I know that some of you prefer a clean copy-paste of the needed modules onto the user’s machine instead of EXEs, but I’m wondering about mixing the two…
My idea is:

  1. A development-only repository where I keep all the Python code (standalone tools, DCC scripts, …). Artists will not have this.

  2. Another repository (we already have it) that is basically a UE4 project with a library of shaders and blueprints; we cannibalize it by migrating assets into work projects to reuse the good stuff we’ve developed. This one should be shared with artists, since they can store there useful assets they make while working (usually we briefly identify the assets together, then the artist documents each one a little, gives it a proper name and pushes it).

  3. A shared folder where the developer (me) copies tested and completed scripts and tools. All dependencies and modules will also be stored here.
    The idea is to write a setup tool (an executable) that, once launched by the end user, does the dirty job of copying all the files into one or more local folders.
    On first run this script will also create the system environment variables that every other tool/script uses to locate these local folders.
    So, for example, an Unreal Python script could import Qt by adding a path read from a ‘TOOLS_PATH’ env variable, and so on (see the sketch after this list).
    This should spare users (and me) the task of keeping their Python installation and package folders up to date: each time I release something new they only need to re-launch the script.
    I remember that @Theodox told me about this somewhere, and I think it was @R.White who sold me on the idea of giving users non-exe-packaged stuff where possible.
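A minimal sketch of what I mean on the consumer side (assuming Qt.py lives directly under that folder; the variable name is just what I’m using for now):

```python
import os
import sys

tools_path = os.environ.get("TOOLS_PATH")
if tools_path and tools_path not in sys.path:
    sys.path.insert(0, tools_path)      # make the shared local folder importable

from Qt import QtWidgets                # resolved from TOOLS_PATH, not from site-packages
```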

Do you think this could be a good plan? :slight_smile:

** EDIT **
While chatting on Slack it was pointed out that it’s better to avoid .exe files for tools, except for the one or two used to set up the environment and install stuff. As Dhruv suggested, many of those tasks could be done by simpler batch files, avoiding .exe for that purpose too.
All other tools will then require that users have their own local Python installation: all packages and modules should be distributed via a clean folder where the files are copied, without messing with the system package library.


#26

At my studio we set up central Python virtual environments for Mac/Linux/Windows and we force our scripts to use them for everyone. This way we can be certain that no one will be missing any required modules. All scripts also get called from the server to make sure everyone gets the same thing.

Having to take care of modules (install, update, versioning, etc) on every workstation would be too much hassle and too prone to errors.

On Windows we deploy a bat file which points to python.exe and the scripts on the server path.


#27

How large is your team? I can imagine running all users off a single remote machine could create a bottleneck in certain situations.


#28

It’s an idea I considered, but since our servers are, well, not exactly great machines, I dropped it.
Moreover, I’ve read many times that in theory virtual envs are meant more for development than for official releases… Of course each of us can make his/her own decision, but as an inexperienced Python dev I prefer not to take non-standard routes :wink:
Your system surely simplifies maintenance, but, as Bob asked, doesn’t this cause bottlenecks or issues in case of network problems? Users are always bound to the server’s status.


#29

About 200. Apart from the few seconds at script launch we don’t have any bottleneck on that side. I would say scripts are tiiiiiny compared to all the other traffic going on in our network.
Those who do coding manage their own local virtual environments based off the same requirements file.


#30

My feeling is that virtualenvs are really a developer tool.

They’re vital for developers because they let you work in isolation: “it works on my machine!” syndrome is almost always the result of relying on something without realizing that you need it – and then shipping tools off to people who don’t have it, whatever “it” is. If you work in a virtual env you can be 100% sure you know what’s there. I’ve probably got 20 virtualenvs on my machine right now – but when I send stuff to users it’s a plain bucket of files. I’m pretty leery of actually distributing them to non-technical users, although you could probably manage a system that spun them up from scratch using a bat file or the like.

So, I love virtualenvs as a disciplined way to track what you actually need for your tools. But for users, I prefer to give them a single, complete environment that they don’t have to touch. Most artists don’t want to manage a Python ecosystem and thus are not very good at it – and often the ones who are interested in trying end up breaking your stuff by doing something that seems innocent, like installing a package which breaks your tool on that one machine and isn’t visible without hours of version-sleuthing.

I’m currently working on a method for building a complete environment using pipenv on the developer side and a build script that takes a vanilla Python environment, clones it, and then uses pipenv and Pipfiles to build a working distribution. That whole thing gets shipped to users so we have (in theory!) completely self-contained, 100% reproducible environments. However I have the luxury of relying on an in-house tool for distributing the bits, so I don’t have to try to be economical about transfer rates and so on.
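If it helps, the rough shape of that build step is something like the sketch below – the paths are placeholders, `pipenv lock -r` is the (older) way to dump a Pipfile.lock as a plain requirements file, and this is an illustration of the idea rather than the actual script:

```python
import shutil
import subprocess
from pathlib import Path

VANILLA_PYTHON = Path(r"D:\builds\python37_vanilla")  # untouched interpreter to clone
DIST = Path(r"D:\builds\tool_env")                    # the folder that ships to users

def build_distribution():
    if DIST.exists():
        shutil.rmtree(DIST)
    shutil.copytree(VANILLA_PYTHON, DIST)             # clone the clean interpreter

    # dump the Pipfile.lock as a plain requirements list
    lock = subprocess.run(["pipenv", "lock", "-r"],
                          capture_output=True, text=True, check=True)
    (DIST / "requirements.txt").write_text(lock.stdout)

    # install with the *cloned* interpreter's pip so nothing leaks in
    # from the developer's own environment
    subprocess.run([str(DIST / "python.exe"), "-m", "pip", "install",
                    "-r", str(DIST / "requirements.txt")], check=True)
```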

The main hassle factor so far has been that pipenv – like any virtualenv-based strategy – depends on developers creating fully specified packages, like what you see on PyPI. That means you have to wrestle with the horror that is setup.py, which is a pain in the ass. I’m still experimenting to find the right granularity for the pieces – I like doing lots of smaller projects so that the version control history is clear and useful, but the setup.py tax may push me towards a smaller number of bigger chunks.

The “right” way to get the actual bits into the hands of users may really be something you want to negotiate with your IT folks instead of solving it on your own – they may have tools or options you don’t know about. It’s certainly worth asking. The mechanics are really not too important, though – the big thing is to decouple the way you get the bits to your users from their user experience. Whether it’s a .bat file or a launcher application or something built in to another tool, you want the average user’s experience to be simple and convenient (and very hard to bypass!) – if it becomes too time consuming or too manual they will quietly opt out, and then you’ll spend lots of time debugging a ‘problem’ that you know is solved before you realize that user X is on a two-month-old version of the tool.


#31

Hi!
So after some days and a bucket of good advice I managed to build a very rough pipeline for packaging and distributing my tools to the other people here at the studio.
Please note again the word rough :sweat_smile:
I have a virtual environment for development and a release_lib.py script that I (and only I) call to package stuff.
It basically:

  1. Creates a release_timestamp folder

  2. Uses pip freeze to generate an updated requirements.txt

  3. pip installs the requirements into a fresh new folder under the release root

  4. Copies specific folders and files from the development subtree to the release folder (tools, custom packages…)

So in the end I should have this release folder ready to be stored on a shared folder on our server.
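A stripped-down sketch of what release_lib.py does, following the steps above (folder names are placeholders):

```python
import shutil
import subprocess
import sys
from datetime import datetime
from pathlib import Path

DEV_ROOT = Path(r"D:\dev\mhp")                      # development checkout
RELEASES = Path(r"\\server\share\mhp_releases")     # shared location on our server
TO_COPY = ["tools", "mhp_packages"]                 # project folders shipped as-is

def release():
    # 1. timestamped release folder
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    release_dir = RELEASES / ("release_" + stamp)
    release_dir.mkdir(parents=True)

    # 2. updated requirements.txt from the dev virtual environment
    freeze = subprocess.run([sys.executable, "-m", "pip", "freeze"],
                            capture_output=True, text=True, check=True)
    reqs = release_dir / "requirements.txt"
    reqs.write_text(freeze.stdout)

    # 3. pip install the requirements into a fresh folder under the release root
    subprocess.run([sys.executable, "-m", "pip", "install", "-r", str(reqs),
                    "--target", str(release_dir / "site-packages")], check=True)

    # 4. copy the project folders themselves
    for name in TO_COPY:
        shutil.copytree(DEV_ROOT / name, release_dir / name)
```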

On the other side, the user has to launch a setup_lib.exe, which is just a Python executable: I know we discussed using a bat file instead of a compiled exe, but since I wanted to use this moment to check for some folders and variables too, I ended up with this solution.
The setup_lib asks for two folders, one being the place where to put the library. It checks first for an environment variable: if it’s present, it prompts for a setup in the known location; if not, the user (or the install-man) must say where with a simple browse-folder dialog.
When given the ok, the script:

  1. Updates the environment variable.

  2. Checks for a Python installation (I rely on the standard Python installer) and, if it’s not present OR isn’t the required version, launches the installer (stored on the server)

  3. Copies all the stuff from the shared folders to the desired place

  4. Creates/updates the PYTHONPATH env variable to make it include both the needed site-packages and the custom packages. No other package is installed in the system Python.
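And, roughly, the setup side looks like this sketch (the env variable name, folders and the version check are simplified placeholders for what the real setup_lib does):

```python
import os
import shutil
import subprocess
import sys
from pathlib import Path

ENV_VAR = "MHP_TOOLS_PATH"                                   # placeholder name
RELEASE_SHARE = Path(r"\\server\share\mhp_releases\latest")
REQUIRED_PY = (3, 7)

def install(target_dir):
    target = Path(target_dir)

    # 1. persist the env variable for future sessions (Windows-only, via setx)
    subprocess.run(["setx", ENV_VAR, str(target)], check=True)

    # 2. stand-in for the version check: the real tool inspects the system
    #    Python install and launches the installer from the server if needed
    if sys.version_info[:2] != REQUIRED_PY:
        print("Wrong Python version - run the installer from the server share.")
        return

    # 3. copy everything from the share to the chosen local folder
    if target.exists():
        shutil.rmtree(target)
    shutil.copytree(RELEASE_SHARE, target)

    # 4. extend PYTHONPATH with site-packages and the custom packages
    #    (naive: appends blindly, a real tool would de-duplicate first)
    extra = os.pathsep.join([str(target / "site-packages"), str(target / "mhp_packages")])
    current = os.environ.get("PYTHONPATH", "")
    subprocess.run(["setx", "PYTHONPATH",
                    extra if not current else current + os.pathsep + extra], check=True)
```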

I’ve tested that a couple of tools work properly and it seems ok… I’m sure there is a huuuuuge amount of room for improvement, but it’s a comforting beginning :slight_smile: