Pipeline design patterns

Hello,

I am trying to get a fundamental understanding of complex version management in pipelines.
No matter how many times I read http://www.pipelinepatterns.com/theory/shapes.html, I never come close to understanding it.
I am stuck at the internal consistency section of U-Shapes in Parallel / Workspace Local Assets.

Question 1)
http://www.pipelinepatterns.com/zsvg/shapes_parallel_no_tooling.svg
Here is an example:
1 - The geo artist checks in the 1st modeling revision
2 - The texture artist checks out the Alembic representation of the 1st revision and starts working
3 - Because of parallelism in pipelines, the modeler continues to work and makes new commits
4 - Texturing gets to the point of checking in their work. They still reference an older revision of the geometry, for which they would be able to make a consistent commit
However: Polson describes a pipeline where you always build the entire asset, so texturing has the option to:
a) build a consistent asset by building it against the geo revision currently present in their workspace
b)

the Geo Artist checks out, modifies, and exports the Geo, but in addition, checks out (read-only) and exports the Tex. the Tex Artist does just the opposite.

Which I interpret as: before the texture artist is ready to commit, he makes a fresh geo checkout and uses it to build the most up-to-date asset. But this seems to contradict what is said above:

One common way to achieve internal consistency is for each artist to fully export the Local Asset, just to make sure their latest revisions are in sync with each other.

The way I understand it:
a) leads to a consistent asset, but to what I’d compare to a lost update in databases, since it overrides the head asset revision with older geometry. On top of that, the geo head revision in source control is now out of sync with asset control.
b) has the advantage that source control and asset control remain in sync / the asset is always in the latest possible state, but that state is likely to be inconsistent.
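To make the lost-update worry in a) concrete, here is a toy sketch in Python (all names and data invented for illustration) of how rebuilding the full asset from an older workspace geo revision rolls the published asset head back behind source control:

```python
# Toy model of option (a): the asset head is rebuilt from whatever geo
# revision the texture artist still has checked out, silently replacing
# a build that used newer geometry. All names here are made up.

geo_repo = {"head": 1}   # source control: latest geo revision
asset_repo = {}          # asset control: published full-asset builds


def build_asset(geo_rev, tex_rev):
    """Build the full asset from one geo and one tex revision."""
    return {"geo": geo_rev, "tex": tex_rev}


# 1) Geo artist checks in revision 1; texture artist checks it out.
tex_workspace_geo = geo_repo["head"]   # tex artist now holds geo r1

# 2) Modeling keeps working in parallel; geo moves on to r3.
geo_repo["head"] = 3

# 3) The modeler publishes the full asset from their up-to-date workspace.
asset_repo["head"] = build_asset(geo_rev=3, tex_rev=1)

# 4) Texturing finishes and publishes option (a): internally consistent,
#    but built from the *old* geo revision still in their workspace.
asset_repo["head"] = build_asset(geo_rev=tex_workspace_geo, tex_rev=2)

print(asset_repo["head"])   # {'geo': 1, 'tex': 2}
print(geo_repo["head"])     # 3  -> asset head no longer matches geo head
```

The final state is exactly the lost update: the asset head carries geo r1 while source control is already at r3.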

I know the relation between geo and tex might be a bad example here, as I imagine them working in a more linear fashion than in parallel, but I took it as the example because he did.

Question 2)
In U-Shapes in Parallel he mentions

Developing components in separate workspaces creates some workflow difficulties.
Versionitis. Our artists can easily check in incompatible components. The asset will be internally inconsistent. They will need to carefully synchronize their checkins, of both the source and run-time data.

In this example he shows a different pipeline approach where each component commits only its own work instead of building the entire asset. How is this an inferior design with regard to synchronisation and inconsistencies? I don’t see how the full asset build solves any of these issues.

I am pretty sure I am misunderstanding something, so any help is greatly appreciated. Thanks


Does anyone have any comments on this? The lack of pipeline resources on the web makes it nearly impossible to find information on topics like this.

The pipelines site is awesome - from my understanding:

From Workspace Local Assets

In this scheme, (without additional tooling) each artist exercises both component workflows. In other words:

  • the Geo Artist checks out, modifies, and exports the Geo, but in addition, checks out (read-only) and exports the Tex.
  • the Tex Artist does just the opposite.

In this instance both domains, texturing and modeling, check out and export each other’s domain assets, making sure revisions are always in sync, i.e.

Modeling checks out the geo, but also checks out the textures.
Modeling exports the geo, but also exports the textures.

and vice versa with textures…
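A rough sketch (names invented) of what "each artist exercises both component workflows" looks like: whoever publishes the Local Asset exports every component from the revisions currently in their own workspace, so each build is internally consistent with itself:

```python
# Each artist exports *all* components from their own workspace, so any
# single publish is internally consistent -- but two artists' publishes
# can still disagree with each other. All names are made up.

def export_local_asset(workspace):
    """Export every component found in this artist's workspace."""
    return {component: rev for component, rev in workspace.items()}

geo_artist_ws = {"geo": 4, "tex": 2}   # geo r4 editable, tex r2 read-only
tex_artist_ws = {"geo": 3, "tex": 2}   # tex r2 editable, geo r3 read-only

# Both exports are self-consistent, yet they conflict: whichever publish
# lands second defines the asset head.
print(export_local_asset(geo_artist_ws))  # {'geo': 4, 'tex': 2}
print(export_local_asset(tex_artist_ws))  # {'geo': 3, 'tex': 2}
```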

This is essentially a first-to-the-check-in-wins methodology - though you can get a lock issue where one domain bottlenecks another at the 11th hour. If domain B is dependent on domain A, and domain A checks out domain B’s work and exports it, who’s to say it’s valid or whether it needs revisions?

To Question 2

The way to address inconsistencies with domain assets is to introduce a staging mechanism - i.e. domain A masters a version for downstream domains to pull into their work in progress.

This is far nicer than a push mechanic, where an upstream domain makes a change that its dependent domains reference - you end up with breaking changes that ripple down through the domains. Essentially it creates a domain constraint.

With staging there’s a buffer for domains to pull safe changes from an upstream one, and crucially they can test whether the delivered upstream domain’s published asset is valid. If not, they can push back for changes without breaking their own existing, safely published asset.

Essentially each domain is independent, with downstream domains pulling only safe published changes from upstream ones, who in turn publish a safe asset for their own domain.

model -> wip -> publish -> staging -> rigging -> wip -> publish -> staging -> animation…
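The publish/staging/pull flow above could be sketched like this in Python (a minimal sketch under invented names - a real setup would live in your asset management tooling):

```python
# Minimal sketch of the staging idea: upstream masters a version into a
# staging area; downstream validates it and pulls it only when it passes
# their own checks. Everything else keeps working against the last good
# pull. All names are made up for illustration.

staging = {}   # staged versions visible to downstream domains


def publish(domain, version, payload):
    """Upstream domain masters a version into staging."""
    staging[domain] = {"version": version, "payload": payload}


def pull(deps, validate):
    """Downstream pulls each staged upstream version only if it validates."""
    pulled = {}
    for dep in deps:
        staged = staging.get(dep)
        if staged and validate(staged):
            pulled[dep] = staged   # safe: take the staged version
        # else: keep the last version already pulled, and push back to
        # the upstream domain for fixes instead of breaking our asset
    return pulled


# model publishes geo v2 into staging...
publish("model", 2, {"geo": "cube_v2.abc"})

# ...rigging pulls it only if it passes rigging's own validation
ok = pull(["model"], validate=lambda s: s["version"] >= 1)
print(ok)
```

The key design point is that `validate` belongs to the downstream domain: the consumer decides when an upstream publish is safe to take, rather than having changes pushed onto it.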

This way you always have the most stable version, though possibly not the most recent. I’m pretty sure the site has info on staging; I’ll see if I can find something on it.