Keeping edge indexes the same during mesh recreation with the Maya API

Hello guys!
Has anyone run into this problem: when recreating a polygon mesh with the Python API, the edge indexes come out in the wrong order?
(The vertex and face indexes are all correct, and the mesh itself is recreated correctly.)

The mesh is recreated using the Python API and the MFnMesh.create() function:

meshFn.create(vertex_count, polygon_count, vertex_positions_mfloatpoints, each_polygon_vertex_count, polygon_connects)

As written in the Maya docs:
Each face is described by a number of sequences of integers—each integer representing an edge id. The first sequence of edges represents the boundary of the face.
So if I understood correctly, the proper order of vertex ids per poly should solve the problem.

If I understand correctly, I have a problem with the order of the polygon_connects data, which I'm reading from the mesh.
Currently I retrieve it this way (it is vertex_ids_per_poly):

polys_iterator = OpenMaya.MItMeshPolygon(m_object)
vertex_ids_per_poly = []
while not polys_iterator.isDone():
    verts = OpenMaya.MIntArray()
    polys_iterator.getVertices(verts)
    vertex_ids_per_poly.extend(verts)
    polys_iterator.next()

Maybe someone has had a similar problem?

Instead of working with edge indices, your code will be much more stable if you convert the edge indices to pairs of vert indices. I know that’s probably not what you want to hear, but there are reasons.
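For example, here's a minimal sketch of that pair-based remapping in plain Python (in Maya you'd get the per-edge vertex pairs from something like MFnMesh.getEdgeVertices on each mesh; vertex ids are assumed stable between the two meshes, which matches your situation):

```python
def remap_edge_ids(old_edges, new_edges, edge_ids):
    """Translate edge ids from one mesh to another via vertex pairs.

    old_edges / new_edges: list of (v0, v1) vertex pairs, one per edge
    index on each mesh.  Vertex ids must be stable between the meshes.
    """
    def key(pair):
        # Edges are undirected, so sort the pair before using it as a key.
        return (min(pair), max(pair))

    # Look up an edge on the new mesh by its vertex pair.
    new_lookup = {key(p): i for i, p in enumerate(new_edges)}
    return [new_lookup[key(old_edges[e])] for e in edge_ids]
```

So instead of storing "edge 42", you store "the edge between verts 5 and 7", which survives any edge reordering.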

In the count/connect representation of a mesh, edges are implicitly defined. Also they're not stable through most exports. Export a mesh to .obj and re-import it. If the mesh was built in Maya, then the edge indices might not match… Or they might. It's very much not a guarantee. (I just tested this on one of my own meshes, and the edges got reordered.)

This is because Maya uses a completely different representation of meshes in its .ma/.mb files.
In there, instead of having count/connect arrays, they take vertices, and explicitly build edges from pairs of vertices, then they build faces from loops of edges. That means that the edge index when saving to .ma/.mb is explicitly defined. And since we don’t have access to whatever function builds a mesh with that representation, we get no control over edge ordering.

Finally, rather than using an iterator to build the faceCount/faceList arrays, it's easier to use MFnMesh.getVertices.

Thank you for your answer!
Great tip about getVertices!

The reason I'm so bothered with these edge ids is that on these recreated meshes the rigging department places rivets, which take an edge id as a parameter.
When a mesh is recreated, the edge ids no longer match and the rivet jumps to another place on the mesh =(

What I take from your words is that there is no way to recreate geometry with the same edge ids, for architectural reasons (with the API, at least for now)?

Of course, I can work around this problem for such things, I just wanted to solve it at the source =)

You understood correctly.

Also, in my personal opinion, a uv-based constraint would be better for rigging (like PointOnPoly, or if you’re in 2020, the Pin, or Proximity Pin). But that’s a whole other discussion :slight_smile:


Thank you for your time and knowledge, you helped me a lot!

Does Maya have intrinsic face attributes (a local "uv" per face which doesn't need to be created beforehand)?
If so, do you know how to access them?

I mean, any 3 points can be built into a barycentric coordinate system, so there’s no need to explicitly define some intrinsic.

And I’m 100% sure that any mesh connection just processes input coordinates into barycentrics, then just weights between the 3 points of whatever triangle you’re binding to. Luckily (unless you’re moving your bind point around) this only has to be done once, and the cached values can be reused.
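Since barycentrics keep coming up: here's a minimal sketch of computing barycentric coordinates for a point against a single triangle, using the standard dot-product formulation (pure Python; this is an illustration of the math, not Maya's internal code):

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of point p in triangle (a, b, c).

    All inputs are (x, y, z) tuples.  u*a + v*b + w*c reconstructs p
    (for p in the triangle's plane), and u + v + w == 1.
    """
    def sub(m, n):
        return (m[0] - n[0], m[1] - n[1], m[2] - n[2])

    def dot(m, n):
        return m[0] * n[0] + m[1] * n[1] + m[2] * n[2]

    v0, v1, v2 = sub(b, a), sub(c, a), sub(p, a)
    d00, d01, d11 = dot(v0, v0), dot(v0, v1), dot(v1, v1)
    d20, d21 = dot(v2, v0), dot(v2, v1)
    denom = d00 * d11 - d01 * d01          # zero for degenerate triangles
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return (1.0 - v - w, v, w)
```

These three weights are exactly the "cached values" a pin-style connection can reuse every frame.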

Controlling a connection directly via barycentric coordinates could be useful sometimes, but the manual process of inputting them would be incredibly annoying.

Generally, uv coordinates won’t change their “meaning” as easily, or as often as verts, faces, or edges. This is because it would be a real pain to have to repaint a texture every time the mesh changed. So once a map has been painted, we generally match our uv coordinates to the map instead of the other way around. This means that our coordinate that sits at the tip of the nose will stay at the tip of the nose so long as we’re using the same map. Then under the hood, it probably finds the uv triangle containing your sample point, figures out the corresponding mesh triangle (and caches that data), and uses barycentrics to apply new motion.

But the best way IMO is the ProximityPin because instead of sampling a uv point from a uv layout, it samples a 3d point against an orig mesh. And the overall pose of a mesh (especially once it’s made it into rigging) should be static. But if it’s not, then some easy constraints to guide joints turn it into a fully procedural setup that will require no numerical tweaking, or custom UV sets. You can sort of do this already by getting the uv coord from closestPointOnMesh node, then connecting that to a UV based constraint. But using the ProximityPin is easier and way more efficient when you’re doing multiple connections.

Also, I’m totally glossing over the crazy algorithms you have to use to find a containing triangle in UV space, or a closest point in 3d space in a reasonable amount of time.


In other words: Unless you’re doing something very specific (or just trying to learn!) then we have some great tools already that you should probably try to use before rolling your own solution.


Thanks for your response

I was looking for a way to do that between the rigged and renderable asset scenes, without needing to keep the same uvs between those scenes.

Depending on the type of caches used, we can't necessarily keep uv sets consistent (the current Abc plugin in Maya doesn't store uv sets, for instance).

A solution based solely on geometry would work without the need to export and load uvs from one task to the other, and would simplify possible planning problems between riggers and shading artists: as long as the mesh remains the same, it would always be perfectly placed in any scene.

It's not just the Maya alembic plugin. The alembic mesh schema only has 1 uv set in it. If you were feeling fancy, you could definitely store extra UV data in custom attributes in the .abc file, but that would probably require messing around in the alembic api. Definitely doable, but since it's custom data, you don't get any of the convenience methods. Then you'd have to write some way to read it.

But in your case, why can’t you take those locators you have pinned in the rigs and export them along with the geometry?

The idea here is to be able to recreate the pinned object anywhere in the pipeline with just the constraint information.

I was also curious about intrinsic face values to read normals, uvs and other data: it's something I use a lot in Houdini and it saves a lot of time.

If you want to mess with it, I’ve got a script that gets the barycentric coordinates of an array of uv points on a mesh (technically mean value coordinates, which are a generalization of barycentric coordinates that works on polygons with >3 vertices) so you can pretty easily get the vert indices and weights to rebuild the pins. I’m guessing this is similar to how Maya’s transfer attributes tool works when the sample space is set to UV.

Once you have the barycentric coordinates, you could probably figure out the worldspace of your to-be-pinned locator for any one frame, then use one of the existing methods of pinning to stick it to your render mesh. Or you could even write your own node to pin directly with barycentric coordinates. That probably wouldn’t be too difficult.
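Applying the cached weights each frame really is just a weighted sum of the triangle's deformed vertex positions. A minimal sketch (pure Python, hypothetical function names; a custom node would do this same arithmetic in its compute):

```python
def weighted_position(weights, points):
    """World-space position of a pinned point: the weighted sum of its
    containing triangle's (deformed) vertex positions.

    weights: barycentric weights (u, v, w), summing to 1
    points:  the triangle's current vertex positions as (x, y, z) tuples
    """
    x = sum(w * p[0] for w, p in zip(weights, points))
    y = sum(w * p[1] for w, p in zip(weights, points))
    z = sum(w * p[2] for w, p in zip(weights, points))
    return (x, y, z)
```

Since the weights are computed once at bind time, moving the triangle's verts moves the pinned point along with it for free.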

I still think that just including the extra locators in the alembic would be easier, but that’s because I already do it for my pipeline and I don’t know yours :slight_smile:

You’re awesome, thanks a lot :slight_smile:
