My Humble Pipeline [MHP] project

Interesting point Theo… But knowing our "working style", I admit I find removing options difficult.
Given a blend file, I can see 3 cases:
1- only one mesh object to be exported, so I could name it precisely and export it without any user action
2- many meshes from a common logical set (e.g. a Kitchen blend file with stuff like Table and Chair inside): the blend name is the set name and each mesh has to be exported to its own fbx. For the artist's convenience objects can be placed anywhere in the scene, but the export will (temporarily) place them at the origin so their pivot in UE will be ok (see the small sketch after this list).
3- many meshes that need to be exported as a single fbx and then imported as a combined mesh. Here I have a blend file similar to case 1, but the number of files to be produced changes. This was the main reason I considered the single/multi option. How could I identify blend files that are NOT sets but can contain many meshes?
Maybe adding something to its name? Like Kitchen_Set.blend vs ComplexObject.blend
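To clarify what I mean by "temporarily place them at origin" in case 2, a minimal Blender sketch could look something like this (the export path and the exporter options are only illustrative, not my real tool):

# Sketch: park each mesh at the origin, export it to its own fbx, then restore it.
import os
import bpy
from mathutils import Matrix

export_dir = r"D:\export"  # illustrative path

for obj in [o for o in bpy.context.scene.objects if o.type == 'MESH']:
    original = obj.matrix_world.copy()      # remember where the artist left it
    obj.matrix_world = Matrix.Identity(4)   # temporarily move it to the origin

    bpy.ops.object.select_all(action='DESELECT')
    obj.select_set(True)
    bpy.ops.export_scene.fbx(
        filepath=os.path.join(export_dir, obj.name + ".fbx"),
        use_selection=True,
    )

    obj.matrix_world = original             # put it back for the artist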

As always, a lot of stuff to consider :slight_smile:

A better way to think about it is not ‘removing options’ but ‘removing options at export time’. You can move a lot of that info to a one-time setup step that happens the first time a file is exported.

I’ve never been fond of the multiple-files-exported-from-one thing, but I know artists love it so I grudgingly support it – each output file is represented in the scene by a transform with a custom attribute that gives the file name, then I just loop over those nodes to do the exports.
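A rough sketch of that pattern, assuming Maya and a custom string attribute called "exportFile" on each export root (both the attribute name and the export step are assumptions, not necessarily the real setup):

from maya import cmds

def iter_export_roots(attr="exportFile"):
    # yield (node, output file name) for every transform tagged with the attribute
    for node in cmds.ls(type="transform"):
        if cmds.attributeQuery(attr, node=node, exists=True):
            yield node, cmds.getAttr(node + "." + attr)

for root, file_name in iter_export_roots():
    cmds.select(root, hierarchy=True)
    # hand the selection off to whatever FBX export routine the pipeline uses
    print("would export", root, "->", file_name)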

The big thing you want to avoid is just human error at export time – that causes all sorts of problems. And since exporting is something that artists do all day every day, even a fraction of a 1% error rate will add up fast.

The best exporters are the ones that can be (re)run without a user at all.

Our principal modeler is a real fan of this stuff. As an old-time 3D generalist I understand his point, after all :slight_smile:
A proxy object in the scene with some information looks good: it could be created at first export time, or even by a small wizard when the scene is created…

I should add this to my tips :slight_smile:

Hi everyone!
Year 2019 is here and it’s time to make some progress!
With the help of many of you on the Slack channels I’m starting to have some micro-tools working: the first one is the “project creator”: it creates fresh UE4 projects and sets up their P4 depot and so on.
Nice start…

Since standalone tools will be (I suppose) relatively small and the final users (aka “mostly artists”) shouldn’t have to play with Python, my idea is to deploy the small tools as .exe files.
I’ll keep .py for DCC/Unreal scripts that don’t need an installed interpreter (since it’s bundled with the software).
So at the moment I’m thinking about how to store and deploy stuff: I’ve read a lot of old posts here and already talked briefly on Slack about this topic. I know that some of you prefer a clean copy-paste of the needed modules onto the user’s machine instead of EXEs, but I’m wondering about mixing the two…
My idea is:

  1. A repository used for development only, where I keep all the Python code (standalone tools, DCC scripts, …). Artists will not have this.

  2. Another repository (we already have it) that is basically an Unreal 4 project with a library of shaders and blueprints that we cannibalize by migrating assets into work projects, to reuse the good stuff we’ve developed. This one should be shared with artists, since they can store useful assets there while working (usually we briefly identify the assets together, then the artist documents each one a little, gives it a proper name and pushes it).

  3. A shared folder where the developer (me) copies tested and completed scripts and tools. All dependencies and modules will also be stored here.
    The idea is to write a setup tool (an executable) that, once launched by the end user, will do the dirty job of copying all the files into one or more local folders.
    This script will also create (the first time) the system environment variables used by any other tool/script to access these local folders.
    So for example an Unreal Python script could import Qt by adding a path read from a ‘TOOLS_PATH’ env variable to sys.path, and so on (see the small sketch after this list).
    This should spare users (and me) the task of keeping their Python installation and package folder up to date: each time I release something new they only need to re-launch the script.
    I remember that @Theodox told me about this somewhere, and probably @bob.w sold me on the idea of giving users non-exe-packaged stuff where possible.
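A small sketch of the env-var idea from point 3 (‘TOOLS_PATH’ is the variable name I mentioned above; Qt.py is just an example of something shipped in that folder):

# Inside a DCC/Unreal Python script: make the locally-copied packages importable.
import os
import sys

tools_path = os.environ.get("TOOLS_PATH")
if tools_path and tools_path not in sys.path:
    sys.path.append(tools_path)

import Qt  # Qt.py, or any other module deployed to the tools folder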

Do you think this could be a good plan? :slight_smile:

** EDIT **
While chatting on Slack it was pointed out that it’s better to avoid .exe files for tools, except for one or two used to set up the environment and install stuff. As Dhruv suggested, many of those tasks could be done by simpler batch files, avoiding .exe for that purpose too.
All other tools will then require that the user has his/her own local Python installation: all packages and modules should be distributed using a clean folder where the files are copied, without messing with the system package library.

At my studio we set up central Python virtual environments for Mac/Linux/Windows and we force our scripts to use them for everyone. This way we can be certain that no one will miss any required modules. All scripts also get called from the server to make sure everyone gets the same thing.

Having to take care of modules (install, update, versioning, etc) on every workstation would be too much hassle and too prone to errors.

On Windows we deploy a bat file which points to python.exe and the scripts on a server path.

How large is your team? I can imagine running all users off a single remote machine could create a bottleneck in certain situations.

It’s an idea I considered but since our servers are, well, not exactly great machines I dropped it.
Moreover I’ve read many times that, in theory, virtual envs are more for development than for official releases… Of course any of us can make his/her own decision, but as an inexperienced Python dev I prefer not to take non-standard routes :wink:
Your system surely simplifies maintenance, but as Bob asked, doesn’t this cause bottlenecks or issues in case of network problems? Users are always bound to the server status.

About 200. Apart from the few seconds at script launch we don’t have any bottleneck on that side. I would say scripts are tiiiiiny compared to all the other traffic going on in our network.
Those who do coding manage their own local virtual environment based off the same requirements file.

My feeling is that virtualenvs are really a developer tool.

They’re vital for developers because they let you work in isolation: “it works on my machine!” syndrome is almost always the result of you relying on something without realizing that you need it – and then shipping tools off to people who don’t have it, whatever “it” is. If you work in a virtual env you can be 100% sure you know what’s there. I’ve probably got 20 virtualenvs on my machine right now – but when I send stuff to users it’s a plain bucket of files. I’m pretty leery of actually distributing them to non-technical users, although you could probably manage a system that spun them up from scratch using a bat file or the like.

So, I love virtualenvs as a disciplined way to track what you actually need for your tools. But, for users, I prefer to give them a single, complete environment that they don’t have to touch. Most artists don’t want to manage a Python ecosystem and thus are not very good at it – and often the ones who are interested in trying end up breaking your stuff by doing something that seems innocent, like installing a package that breaks your tool on that one machine and isn’t visible without hours of version-sleuthing.

I’m currently working on a method for building a complete environment using pipenv on the developer side and a build script that takes a vanilla Python environment, clones it, and then uses pipenv and Pipfiles to build a working distribution. That whole thing gets shipped to users so we have (in theory!) completely self-contained, 100% reproducible environments. However I have the luxury of relying on an inhouse tool for distributing the bits, so I don’t have to try to be economical about transfer rates and so on.
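Not the actual script, but the general shape of that build step is something like this sketch (the paths are placeholders, and I’m assuming an older pipenv where pipenv lock -r emits requirements format; newer versions use pipenv requirements instead):

# Sketch: clone a vanilla interpreter, then install the locked Pipfile deps into it.
import os
import shutil
import subprocess

VANILLA_PYTHON = r"C:\vanilla\python37"   # a clean interpreter kept aside (placeholder)
DIST = r"C:\build\toolkit_env"            # the self-contained environment to ship

shutil.copytree(VANILLA_PYTHON, DIST)     # start from a known-clean interpreter

# Export the locked dependencies from the Pipfile.lock in requirements format.
reqs = subprocess.check_output(["pipenv", "lock", "-r"], universal_newlines=True)
req_file = os.path.join(DIST, "requirements.txt")
with open(req_file, "w") as f:
    f.write(reqs)

# Install those exact versions into the cloned interpreter only.
subprocess.check_call([os.path.join(DIST, "python.exe"),
                       "-m", "pip", "install", "-r", req_file])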

The main hassle factor so far has been that pipenv – like any virtualenv-based strategy – depends on developers creating fully specified packages, like what you see on PyPI. That means you have to wrestle with the horror that is setup.py, which is a pain in the ass. I’m still experimenting to find the right granularity for the pieces – I like doing lots of smaller projects so that the version control history is clear and useful, but the setup.py tax may push me towards a smaller number of bigger chunks.

The “right” way to get the actual bits into the hands of users may really be something you want to negotiate with your IT folks instead of solving it on your own – they may have tools or options you don’t know about. It’s certainly worth asking. The mechanics are really not too important, though – the big thing is to decouple the way you get the bits to your users from their user experience. Whether it’s a .bat file or a launcher application or something built in to another tool, you want the average user’s experience to be simple and convenient (and very hard to bypass!) – if it becomes too time consuming or too manual they will quietly opt out, and then you’ll spend lots of time debugging a ‘problem’ that you know is solved before you realize that user X is on a two-month-old version of the tool.


Hi!
So after some days and a bucket of good advice I managed to build a very rough pipeline for packaging and distributing my tools to other people here at the studio.
Please note again the word rough :sweat_smile:
I have a virtual environment for development and a release_lib.py script that I (and only I) call to package stuff.
It basically:

  1. Creates a release_timestamp folder

  2. Uses pip freeze to generate an updated requirements.txt

  3. pip installs the requirements into a fresh new folder under the release’s root

  4. Copies specific folders and files from the development subtree to the release folder (tools, custom packages…)

So in the end I should have this release folder ready to be stored on a shared folder on our server.
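Roughly, the shape of that release script is something like this sketch (DEV_ROOT, RELEASE_ROOT and the copied folder names are placeholders, not the real ones):

# Sketch of a release_lib.py along these lines.
import datetime
import os
import shutil
import subprocess
import sys

DEV_ROOT = r"C:\dev\mhp"                  # dev project root (placeholder)
RELEASE_ROOT = r"\\server\mhp_releases"   # shared release location (placeholder)

# 1. a timestamped folder per release
stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
release = os.path.join(RELEASE_ROOT, "release_" + stamp)
os.makedirs(release)

# 2. freeze the dev virtualenv
req_file = os.path.join(release, "requirements.txt")
with open(req_file, "w") as f:
    subprocess.check_call([sys.executable, "-m", "pip", "freeze"], stdout=f)

# 3. install those requirements into a fresh folder under the release root
subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", req_file,
                       "--target", os.path.join(release, "site-packages")])

# 4. copy tools and custom packages from the dev subtree
for folder in ("tools", "custom_packages"):
    shutil.copytree(os.path.join(DEV_ROOT, folder), os.path.join(release, folder))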

On the other side, the user has to launch a setup_lib.exe that is just a packaged Python executable: I know we discussed using a bat file instead of a compiled exe, but since I wanted to use this moment to check for some folders and variables too, I ended up with this solution.
The setup_lib asks for 2 folders, one being the place where to put the library. It first checks for an environment variable: if present, it proposes a setup in that known location; if not, the user (or whoever installs) must say where with a simple browse-folder dialog.
When given the ok, the script:

  1. Updates the environment variable.

  2. Checks for a Python installation (I rely on the standard Python installer) and, if not present OR if it isn’t the required version, launches the installer (stored on the server)

  3. Copies all stuff from the shared folders to the desired place

  4. Creates/updates the PYTHONPATH env variable to make it include both the needed site-packages and the custom packages. No other package is installed in the system Python.
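A crude sketch of those four steps (every name, path and the single-folder simplification below are just for illustration, not the real setup_lib):

# Sketch of a setup tool along these lines.
import os
import shutil
import subprocess

SHARED = r"\\server\mhp_releases\latest"      # assumed shared release folder
REQUIRED_PY = "3.7"

# 1. reuse the env var if it already exists, otherwise ask for an install folder
target = os.environ.get("MHP_TOOLS_PATH") or input("Install folder: ").strip()
subprocess.check_call(["setx", "MHP_TOOLS_PATH", target])       # persist per-user

# 2. check for a suitable Python; if missing/wrong, launch the installer from the server
try:
    out = subprocess.check_output(
        ["python", "--version"], stderr=subprocess.STDOUT, universal_newlines=True)
except OSError:
    out = ""
if REQUIRED_PY not in out:
    os.startfile(os.path.join(SHARED, "python_installer.exe"))  # assumed installer name

# 3. copy everything from the shared folder to the chosen local place
if os.path.isdir(target):
    shutil.rmtree(target)
shutil.copytree(SHARED, target)

# 4. point PYTHONPATH at the copied site-packages + custom packages, nothing else
paths = [os.path.join(target, "site-packages"), os.path.join(target, "custom_packages")]
subprocess.check_call(["setx", "PYTHONPATH", os.pathsep.join(paths)])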

I’ve tested the proper functioning of a couple of tools and it seems ok… I’m sure there is huuuuuge room for improvement, but it’s a comforting beginning :slight_smile:

Months passed… daily work kept me away from my project, which languished on my PC…
But now I can dedicate a little bit of my time to it again.
We just finished 3 projects and I tested my project structure and pipeline, finding some problems.
One of them was the distribution and updating of Python scripts: my current system simply doesn’t work.
I have a script that copies stuff from my dev project and creates a zip file… then users must launch a .bat that sets up the packages locally… But a local installation of Python is needed on their machines, and some of them already have one (but a different version).
So it was a mess, and in the end the artists could benefit from only a couple of the tools I wrote.
I need another iteration on it, and this time I’ll try something different.
And I’d like to introduce rez, as some of you suggested to me in the past.
This will help me a lot in dealing with all the different software packages and their different Python versions (3ds Max, Blender, Unreal…)
I’m not sure this is the right problem to solve first, but I’m asking myself whether to:

1- install rez on every PC and distribute packages locally to all people
or
2- install rez on every PC but keep packages on a server
or
3- distribute to users the bare minimum and use bat files to resolve rez environments on the server (so rez is installed only on the server machine)

Last option seems the worst to me…
My feeling is that the second option could be the best one, but I really don’t know how to set up my machine to work that way… I only have a virtual env with all my packages and scripts.

What do you think about it? Am I going in the wrong direction again? :slight_smile:

Rez could solve this problem with any of your 3 options.

  1. Has the advantage of local run-time performance, if your network is slow, at the cost of (1) disk space and (2) keeping every machine up to date. Two potentially significant challenges.
  2. Is the most common approach
  3. This is a little extreme, but is a common-enough approach for a render farm.
  4. Rather, a 2.5, is distributing both packages and Rez on a server, including Python. That way, machines only need network access. However, this comes at a cost of run-time performance as with (2), especially on Windows, due to inferior small-file access compared to Linux, and especially for command-line use, as every command would boot a new instance of Python read off of the network.

To get started on 2, here’s something you can try.

# cmd
pip install bleeding-rez  # Using Python 2 or 3
set REZ_PACKAGES_PATH=\\server\packages
set REZ_LOCAL_PACKAGES_PATH=\\server\packages
rez bind --quickstart
rez env python
> $ exit

Replace set with export on Bash, or with $env: on PowerShell

Type exit to leave the “context”, which is a subshell

In order:

  1. bleeding-rez is a Python 2/3 and Windows-compatible version of Rez on PyPI, but the same applies to the GitHub version
  2. REZ_PACKAGES_PATH is akin to PYTHONPATH and is where Rez looks for packages. By default, it’ll look in your ~/packages directory.
  3. REZ_LOCAL_PACKAGES_PATH is where rez bind will install a few “default” packages, like python
  4. rez bind --quickstart installs these packages, based on software you have on your local machine. Down the line, you’ll create and release these yourself, but this can help you get started (quickly, hence the name).
  5. Finally, rez env python will establish an environment with Python available.

Now have a look at your \\server\packages directory for what options you’ve got available to rez env.

Reproducible Environment

If you take a closer look at the Python package installed, you’ll see it merely references your local install. That’s a recipe for “Works on My Machine” and is no good.

You can guarantee a truly reproducible environment by making a proper Python package, one that doesn’t rely on your system install. Here’s one way of doing that.

  1. Copy your local c:\python27 folder to c:\dev\python\python27
  2. Make sure it runs (some distributions can require you add additional DLLs to the folder)
  3. Make a package out of it.
The Nitty Gritty

python2\package.py

name = "python"
version = "2.7.14"
build_command = "python {root}/install.py"
private_build_requires = ["python"]

def commands():
  global env
  env["PATH"].prepend("{root}/bin")

python2/install.py

import os
import shutil

root = os.path.dirname(__file__)
build_dir = os.environ["REZ_BUILD_PATH"]
install_dir = os.environ["REZ_BUILD_INSTALL_PATH"]

# On calling `rez build`
print("Building package..")
shutil.copytree(
  os.path.join(root, "python27"),
  os.path.join(build_dir, "bin")
)

# On calling `rez build --install`
if int(os.getenv("REZ_BUILD_INSTALL")):
  print("Installing package..")
  shutil.copytree(
    os.path.join(build_dir, "bin"),
    os.path.join(install_dir, "bin")
  )

Finally, build and install the package.

cd c:\dev\python2
rez build --install --clean

Now the python package is available in a shared location that doesn’t depend on your local install. So anyone calling…

rez env python

Is guaranteed to see the same environment.

However there are a few issues with the above approach…

  1. Manually packaging off-the-shelf software is a pain
  2. Your initial c:\dev\python2 is your development package, you will likely want to version that to enable package updates and rebuilds, but versioning binary files is never any fun. Especially ones that are identical to what you’d find elsewhere online.
  3. Python has a dependency on your OS, which means your development package must stay up to date with the OS version currently installed (as “variants”). Which is likely why the above may have errored with a message about Attempting to install a package without variants.
  4. Any Python modules/packages you had installed into that version are now permanently part of that (shared) package, and that’s no good. (They should be individual Rez packages too.)
  5. There are likely DLLs missing from that particular Python package. Some DLLs are expected to be found in your c:\windows\system32 directory, but only if the relevant Visual Studio Redist has been installed on that machine. A recipe for “Works on My Machine”.

To work around those issues, I put together a wrapper around an existing package manager that turns off-the-shelf packages into Rez packages, called rez-scoopz.

git clone https://github.com/mottosso/rez-scoopz.git
cd rez-scoopz
rez build --install

From there, you can install Scoop packages as though they were Rez packages.

rez env scoopz -- install python27
rez env scoopz -- install python  # 3.7
rez env scoopz -- install git
rez env scoopz -- install 7zip

The same principle then applies to turning packages from PyPI into Rez packages, with rez-pipz

git clone https://github.com/mottosso/rez-pipz.git
cd rez-pipz
rez build --install

Such as…

rez env pipz -- install PySide2
rez env pipz -- install Qt.py
rez env pipz -- install pyblish-base pyblish-lite

From there, you only need to make your own packages for your own software, and can establish a reproducible environment like so:

$ rez env qt.py pyside2 pyblish_base git python-3.7
resolved by manima@toy, on Mon Jul 15 09:12:02 2019, using Rez v2.35.0

requested packages:
qt.py
pyside2
pyblish_base
git
python-3.7
~platform==windows  (implicit)
~arch==AMD64        (implicit)
~os==windows-10     (implicit)

resolved packages:
arch-AMD64            C:\Users\manima\packages\arch\AMD64                                        (local)
git-2.21.0.windows.1  C:\Users\manima\packages\git\2.21.0.windows.1\platform-windows\arch-AMD64  (local)
platform-windows      C:\Users\manima\packages\platform\windows                                  (local)
pyblish_base-1.8.0    C:\Users\manima\packages\pyblish_base\1.8.0                                (local)
pyside2-5.12.3        C:\Users\manima\packages\pyside2\5.12.3\platform-windows\python-3          (local)
python-3.7.3          C:\Users\manima\packages\python\3.7.3\platform-windows\arch-AMD64          (local)
qt.py-1.2.0b3         C:\Users\manima\packages\qt.py\1.2.0b3                                     (local)
shiboken2-5.12.3      C:\Users\manima\packages\shiboken2\5.12.3\platform-windows\python-3        (local)

Application launcher

Finally, a little (more) self-promotion, I’m writing more of these kinds of examples in the docs for allzpark, an upcoming application launcher that uses (bleeding) Rez under the hood.

Hi Marcus!
Great reply! :slight_smile:

As said on Slack, I’m trying to implement your solution.
My first attempt used nerdvegas rez and I had problems with something not being a reparse point.
But you said that bleeding-rez solved that, so let’s try again sticking to the plan :wink:
I’ll work on it a little more before holidays! :smile:

So…
I’m continuing to explore rez and I’m slowly putting pieces together.
I’ll leave my thoughts and attempts here as a future memory of my failures :smiley: :smiley:

One of the issues was that rez needs an installed python interpreter to execute, and people around here sometimes already have one or more random and unpredictable Python versions installed…
So the simplest solution:
-Install Python
-Install rez
seemed to me a little impractical, unless I took the risk of breaking something on people’s machines (like messing with PATH or the Python version or whatever)
So I tried to find a way to distribute some stand-alone interpreter.
As Marcus and others rightly spotted, this has some problems.
I could copy/paste my local Python37 (I chose to move to Py3) everywhere and run it, or distribute an interpreter from the Python Embeddable Zip (which I preferred because it’s very light and I only need it to boot rez, after all).
Unfortunately rez needs to be installed with pip… and the compiled stuff doesn’t work once moved: a local path from my PC is hardwired somewhere.
As said by user Blazej on rez slack:

You might be out of luck. Python, when it creates the shims (.exe) is encoding absolute paths. So these guys are not relocatable

I had in fact this:
Fatal error in launcher: Unable to create process using '"d:\portablepy\python.exe" "F:\portablepy\Scripts\rez.exe" bind'

So after some tries I ended up with this approach, which comes from some guesses, various users’ advice and this page:
https://stackoverflow.com/questions/42666121/pip-with-embedded-python

  • Created a mapped drive T: on my machine and placed the unzipped embeddable Python there.
  • Uncommented the import site statement inside the python37._pth file
  • Downloaded the get-pip.py file from https://pip.pypa.io/en/stable/installing/ and placed it in the same mapped directory as the interpreter
  • Ran get-pip.py with the embedded Python
  • Ran the newly installed pip to install bleeding-rez

Then, I quickly tested python and rez on another machine by copying the folder there and mapping it to the same T: drive to avoid the error above.

At the moment this hack seems to work for me: it isn’t the optimal solution I suppose, but seems to be fast and light enough for my small needs.

Now I have to figure out how to properly structure the dev project and local rez on my machine, and finally how to use rez to release packages.
Project structure confuses me because I already have one, but it’s more a collection of many different packages and modules than a specific tool.
I’d like to set up Git repositories like many of you do, but I often see that a Git project tends to map to one specific tool.
Have to learn more…

Hi,
With the current health crisis, I’m looking into any way to develop/stand up a flexible pipeline solution; do you have any news about your work? It’s really interesting to read about your technical ideas and all the issues you discover along the way.

Hi stilobique! Sorry for the delay, but for some reason I didn’t receive any notification of your reply.
Actually I’m quite stuck with my project: studio daily tasks and various other stuff forced me to stop developing this for now. However I’m continuing to gather information and inspiration here and there.
We’re also in the middle of switching our version control system (from P4 to Plastic) and this will require some additional investigation.
But this pipeline project is definitely something I want to develop sooner or later.
For now, with the COVID problems, we’re all working remotely: we mostly connect to studio machines over some remote desktop. We also placed a couple of smaller Unreal projects on Cloudforge and use svn to sync with local copies, while keeping heavier asset files on a Google Shared Drive.

(updated with more details on developer files)

Hi folks!
Months have passed again, but this so-called project is still alive here!
It’s something I’m slowly updating while doing real production at our studio, but I’m now at a point where I can say that we have a pipeline of sorts at last! :slight_smile:

All this stuff is done thanks to the community! In particular I bothered @bob.w, @Theodox, @instinct-vfx, @Dhruv_Govil, and @Minkiu a lot, the last 3 especially for Rez and add-on management!
Many thanks to all of you!! But also to all the people that gave me hints and ideas on Slack, whom I can’t mention here because of… memory :grimacing:

Project Folder Structure

I fought my personal battle with my programmer associates, trying to convince them to use some sort of standard structure for projects, and in the end we found common ground and an agreement on this.

Now a standard UE4 project here looks like this:

Main folder: *ProjectName*
|
|--Engine (if we use a custom compiled version of the engine, it will stay here)
|
|--GameProject (contains the Unreal project, so the Content and all the C++ game source code)
|     |
|     |-- Content
|          |
|          |-- ProjectName
|               |
|               |--Core
|               |--Level Sequences
|               |--Maps
|               |--*All assets folders*
|
|--Raw (production and intermediate assets files)
     |
     |--Developers (like UE4 calls it, personal testing files, untested stuff and so on)
     |     |--developername (a folder for each dev)
     |
     |--Work (source files like .blend, .MAX, .psd, etc.)
     |
     |--Import (files to be imported into the engine, like .fbx, .wav, .png, .tga, etc.)

Assets folders

The folder structure under Work and Import is the same and, given some rules, is project-specific. But it has to be identical on both sides, so our Export/Import scripts can easily save files into the correct place.
Speaking of assets, the Content\ProjectName folder shares the same subfolder tree too.
So the idea is that once it’s decided where an asset must go, we just create its source file (like a Blender scene) in the proper place under Work. The exporter automatically writes the fbx to the same position but under Import… and the same subtree will be recreated under the Content\ProjectName folder by the engine importer.
The same applies to Developer files: the only difference is that they end up in specific subfolders of Content\Developers.
To me we’ve lost some minor advantages by not keeping source and intermediate files together… and, speaking of UE4, by not having the fbxs near the corresponding uassets, but I greatly prefer this clear organization.
The downside is that if an asset must change folder for some reason, we have to change it in up to 3 places.
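To make the mirroring concrete, the Work-to-Import step is basically a path substitution; a minimal sketch (the folder names come from the tree above, the function name is made up):

# Map a source file under Raw\Work to its export path under Raw\Import.
import os

def import_path_for(source_path, project_root):
    # e.g. ...\Raw\Work\Props\Table.blend  ->  ...\Raw\Import\Props\Table.fbx
    work_root = os.path.join(project_root, "Raw", "Work")
    rel = os.path.relpath(source_path, work_root)
    return os.path.join(project_root, "Raw", "Import", os.path.splitext(rel)[0] + ".fbx")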

Internal tools

At the moment we model with Blender and use UE4.
So I wrote:

  • a Blender exporter (see the dcc-kit critique post about this, if you want to). It’s based on a scene hierarchy agreed with the artists, used to organize the assets and control how they’ll be exported, with what name and where.
  • an Unreal importer: this allows artists to import single files, folders or folder trees into the engine. By default it tries to mirror the folder structure and recreate it under Content, instead of relying on the artists’ (and my) care with folder names. It also automatically imports developer assets into the proper place.
  • an “Open source file” tool for UE. This works in combination with the Blender exporter, which writes some metadata inside the FBXs. This small tool (not fully tested yet) opens the source .blend of a uasset directly from UE. Quite nice if you have to edit a mesh and want to avoid picking the wrong file.
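To give an idea of that last point, here is a minimal sketch of the Blender-side half of the metadata trick; the property name is made up and the exporter options are assumptions, not exactly what my tool does:

# Sketch: stamp each exported object with the path of its source .blend,
# so the UE-side tool knows which file to open. "source_blend" is an
# illustrative property name, not necessarily the one used in production.
import bpy

for obj in bpy.context.selected_objects:
    obj["source_blend"] = bpy.data.filepath   # custom property travels with the FBX

bpy.ops.export_scene.fbx(
    filepath=r"D:\export\Table.fbx",   # illustrative path
    use_selection=True,
    use_custom_props=True,             # keep the custom property in the FBX metadata
)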

For now it’s going fine: of course all of this is at a very early stage…

Managing all tools: the Rez way

In the end, after a lot of tests, I went for the Rez way.
I tried both bleeding-rez and vanilla rez, settling on the latter in the end.
I think the tool is great, but it has quite a steep learning curve for people like me who are really new to that kind of environment-management world. For the record, some more introductory docs would really help spread the tool :slight_smile:

A problem here at our studio is that, since we don’t force much software onto machines, users could have any Python version installed, added to PATH or not.
And it’s true that rez can resolve a Python package to use inside an environment, but it still needs a Python to run itself.
This was my biggest problem.
I wanted something portable-like, a standalone rez of sorts.
Possible?
I found a rough solution with WinPython.
The idea is this:

  • We agree on a drive letter to be used for our tools… and it’s *T:* (oh, we’re on Windows!)
  • I map a local tools folder to T: with the subst command
  • I unzip WinPython 3.7.4 in that folder and use that interpreter to execute the rez setup (under that mapped drive again).
    Doing this, all rez stuff will be permanently bound to that drive and that portable Python interpreter.
    Package folders (local and released) are set in a rezconfig.py file and its location is added to the user’s env vars.

At this point it’s possible to copy/paste this folder to any machine, into any folder… map it as T:, set the env var for the rezconfig file and it’s done. No install, and no worrying about system Pythons.
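For reference, that rezconfig.py can stay tiny; something along these lines (the server path is an assumption, the setting names are standard rez config keys):

# rezconfig.py -- pointed to by the REZ_CONFIG_FILE env var on each machine.
# The UNC path below is illustrative; the setting names are standard rez config keys.
local_packages_path = r"T:\packages\local"          # where `rez build --install` goes
release_packages_path = r"\\server\rez\packages"    # where `rez release` publishes
packages_path = [
    local_packages_path,
    release_packages_path,
]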

All packages will reside in a remote folder and not on the user’s machine: users only have the python, arch, os and platform packages.

I wrote a script called redrez (redistributable rez) that basically does this whole setup on a new machine, given some destination folders.
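Just to give the flavour, a stripped-down sketch of what such a bootstrap does is below; the folder names, paths and env var handling are illustrative, not the real redrez:

# Sketch of a "redrez"-style bootstrap: map T:, drop a portable Python + rez there,
# and point rez at the shared package folders.
import os
import shutil
import subprocess

TOOLS_DIR = r"C:\mhp_tools"                   # local folder that becomes T:
SHARED = r"\\server\mhp\redrez"               # assumed server copy of WinPython + rez
REZCONFIG = r"\\server\mhp\rezconfig.py"      # assumed shared rez configuration

# 1. copy the portable interpreter (with rez already pip-installed into it)
if not os.path.isdir(TOOLS_DIR):
    shutil.copytree(SHARED, TOOLS_DIR)

# 2. map the folder to the agreed drive letter so the hardwired paths keep working
subprocess.call(["subst", "T:", TOOLS_DIR])

# 3. tell rez where its config (and therefore the package folders) lives
subprocess.check_call(["setx", "REZ_CONFIG_FILE", REZCONFIG])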

It’s still an ongoing process, as I said, but I’m now seeing something working and it’s already a small victory! :smiley:

4 Likes