 # Convert world space coordinates to object space coordinates in Maya?

Hi TA.org,

Is it possible to convert an arbitrary point in world space into an object's local space?

There are plenty of examples going from object space to world space, but how do I go from world space to object space for a single point?

It really is as simple as multiplying the world space point by the object's worldInverseMatrix.
You'll have to make sure to multiply in the right order, though: the point's world position comes first, then the inverse matrix comes second.

Now, hopefully, here's an explanation you can follow. Feel free to ask more questions if you don't.

Think of how parenting behaves:
You start from where your parent is in world space, then add your own transforms on top of that to get your own world space.

In Maya, you multiply matrices like this to say the same thing:
`WorldMat = LocalMat * ParentWorldMat`
You sometimes see this called “post-multiplying” because the parent world matrix comes after the child, and order matters when multiplying matrices. (Some other software packages use pre-multiplying. Maya uses post.)
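If the "order matters" part seems abstract, here's a quick pure-Python check (no Maya needed; the helper names are mine) showing that a rotation times a translation is not the same as the translation times the rotation:

```python
import math

def matmul(a, b):
    """Multiply two 4x4 matrices (row-major, row-vector convention like Maya)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rotate_z(deg):
    """A 4x4 rotation about Z for the row-vector convention."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, s, 0, 0], [-s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def translate(x, y, z):
    """A 4x4 translation matrix with the offset in the fourth row."""
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [x, y, z, 1]]

# Rotate 90 degrees about Z, then translate +1 in X ...
rt = matmul(rotate_z(90), translate(1, 0, 0))
# ... versus translate +1 in X, then rotate 90 degrees about Z.
tr = matmul(translate(1, 0, 0), rotate_z(90))

# The fourth rows (the positions) end up in different places:
# rt lands at roughly (1, 0, 0), tr at roughly (0, 1, 0).
print(rt[3][:3])
print(tr[3][:3])
```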

Looking at that equation like you’re in algebra class, how would you isolate the LocalMat?
You’d just divide both sides by ParentWorldMat, right?
But we can’t do that because these are matrices and there’s no such thing as “dividing” … but there is multiplying by the inverse.
So we can't divide by ParentWorldMat, but we can multiply by the ParentWorldInverseMat (a.k.a. the parentInverseMatrix plug in Maya).
But again, these are matrices. We can't rearrange them after the fact. We have to pick: put the ParentWorldInverseMat on the right or the left.
I hope you can see that the right side is correct, like this:
`WorldMat * ParentWorldInverseMat = LocalMat * ParentWorldMat * ParentWorldInverseMat`

Because now you can see `ParentWorldMat * ParentWorldInverseMat`. And a matrix multiplied by its inverse cancels out!
So you’re left with the equation `WorldMat * ParentWorldInverseMat = LocalMat`
Which is exactly what you’re trying to figure out
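If you want to convince yourself of that identity outside of Maya, here's a tiny pure-Python sketch using translation-only matrices (Maya's row-vector convention, translation in the fourth row; the helper names and numbers are made up for illustration):

```python
def matmul(a, b):
    """Multiply two 4x4 matrices (row-major, row-vector convention like Maya)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    """A 4x4 translation matrix with the offset in the fourth row."""
    return [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [x, y, z, 1]]

# Parent sits at (2, 0, 0) in world space; child is offset (3, 1, 0) locally.
parent_world = translation(2, 0, 0)
local = translation(3, 1, 0)

# WorldMat = LocalMat * ParentWorldMat
world = matmul(local, parent_world)
print(world[3][:3])  # [5, 1, 0]

# The inverse of a pure translation is just the negated offset.
parent_world_inverse = translation(-2, 0, 0)

# WorldMat * ParentWorldInverseMat = LocalMat
recovered_local = matmul(world, parent_world_inverse)
print(recovered_local[3][:3])  # [3, 1, 0]
```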


Thank you so much for the explanation @tfox_TD!

I might be asking for the wrong thing, basically this is what I want:

I'm trying to axis-snap according to an object's “Object mode” X | Y | Z axes independently, but in code.
Like this:

A Y axis snap would give this:

Doing a world space snap along a world space axis in code is as easy as getting the locator's world space position, taking its Y value, and setting the cube's Y value to that.

But how do I snap along the Cube’s object space Y axis? (As shown in the pictures above)

Yeah, you'd convert the locator's world space position into your box's local space, then pull the locator's y-coordinate and add it to the box's current y-coordinate. You can't just replace the y-coord; you've got to add it.


Haha, my mistake.
I tried that via the channel box and it didn't work (adding the value), but in code it works.
I didn't realize the channel box displays values in world space.

Thanks again!

Oh crud, I may have been wrong. I think what I said would snap a child of the box along the given axis, but not the box itself.
Double check me.

I think you may have to do this instead:

1. Get the locator in box space
2. Construct a matrix that is just the y offset translation. Call it `offsetMat`
3. Multiply `offsetMat * boxLocalMatrix` to get the new local matrix of the box
4. Decompose that matrix and set the box’s channelbox values
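If it helps, here's the gist of those steps sketched in plain Python with translation-only matrices (the helper names and numbers are mine; in Maya you'd use the real MMatrix values, and rotations make the full 4x4 multiply matter):

```python
def matmul(a, b):
    """Multiply two 4x4 matrices (row-major, row-vector convention like Maya)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    """A 4x4 translation matrix with the offset in the fourth row."""
    return [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [x, y, z, 1]]

box_local = translation(1, 2, 0)   # the box's current local matrix (queried from the scene)
y_in_box_space = 5                 # step 1: the locator's y value in box space (made up)

# Step 2: a matrix that is just the y-offset translation
offset_mat = translation(0, y_in_box_space, 0)

# Step 3: offsetMat * boxLocalMatrix gives the box's new local matrix
new_local = matmul(offset_mat, box_local)

# Step 4: "decompose" - for pure translations that's just the fourth row
print(new_local[3][:3])  # [1, 7, 0]
```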

I think that’s what I did, it seems to work?

```python
from maya import cmds as mc
from maya.api import OpenMaya as om


def get_pos_in_obj_space(obj, pt):
    """
    Input obj and pt ws position as tuple
    Return the input point position in the object's object space coordinate system
    """

    def getWorldMatrix(obj):
        return om.MMatrix(mc.xform(obj, q=True, matrix=True, ws=True))

    # def getObjMatrix(obj):
    #     return om.MMatrix(mc.xform(obj, q=True, matrix=True, os=True))

    osel_world_inverse = getWorldMatrix(obj).inverse()

    cust_matrix_row = (pt[0], pt[1], pt[2], 1)
    pt_ws = om.MMatrix((
        (1, 0, 0, 0),
        (0, 1, 0, 0),
        (0, 0, 1, 0),
        cust_matrix_row,
    ))

    ptpos_in_osel_objspace = pt_ws * osel_world_inverse

    pt_os_x = ptpos_in_osel_objspace.getElement(3, 0)
    pt_os_y = ptpos_in_osel_objspace.getElement(3, 1)
    pt_os_z = ptpos_in_osel_objspace.getElement(3, 2)
    return (pt_os_x, pt_os_y, pt_os_z)
```

Later on I simply add the corresponding X, Y, Z values to the object's existing X, Y, Z.

I think that won't work if both the box and its parent have arbitrary rotations.
Here's a setup with a working function:

```python
from maya import cmds
from maya.api import OpenMaya as om


def get_pos_in_obj_space(obj, pt):
    """
    Input obj and pt ws position as tuple
    Return the input point position in the object's parent space coordinate system
    """

    def getMatrix(obj, world=True):
        return om.MMatrix(
            cmds.xform(
                obj, q=True, matrix=True, worldSpace=world, objectSpace=not world
            )
        )

    osel_world_inverse = getMatrix(obj).inverse()
    osel_local_mat = getMatrix(obj, world=False)

    # Don't have to use matrices if we don't need rotations
    # Get the point in the object space
    pt_ws = om.MPoint(pt)
    pt_os = pt_ws * osel_world_inverse

    # aligning the Y axis means zeroing out the x and z
    pt_os.x = 0.0
    pt_os.z = 0.0

    # Get the aligned point in the parent space of the osel
    new_osel_vals = pt_os * osel_local_mat

    # These are the new values that can replace (not add to) the old transform values
    return (new_osel_vals.x, new_osel_vals.y, new_osel_vals.z)


loc1 = cmds.spaceLocator()[0]
loc2 = cmds.spaceLocator()[0]
loc3 = cmds.spaceLocator()[0]
loc4 = cmds.spaceLocator()[0]

cmds.parent(loc4, loc3)
cmds.parent(loc2, loc1)

cmds.setAttr(loc1 + ".t", -2, 1, -0.75, type="double3")
cmds.setAttr(loc1 + ".r", 18, -14, 5, type="double3")

cmds.setAttr(loc2 + ".t", -3, 3, 0, type="double3")
cmds.setAttr(loc2 + ".r", 39, -4, -39, type="double3")

cmds.setAttr(loc3 + ".t", 1, 0.5, -0.2, type="double3")
cmds.setAttr(loc3 + ".r", -22, -20, 0, type="double3")

cmds.setAttr(loc4 + ".t", -3.2, 3.4, 1.5, type="double3")
cmds.setAttr(loc4 + ".r", 12, -12, 3, type="double3")

pt = cmds.xform(loc4, query=True, translation=True, worldSpace=True)
newPt = get_pos_in_obj_space(loc2, pt)

# Edited to actually update the position of the locator, rather than print it
# Set the loc2 translation to the new point that is snapped along the y axis
cmds.setAttr(loc2 + ".t", *newPt, type="double3")
```

This is the expected value for `loc2.t` after the command aligning its y-value with loc4
I found this by doing a Y-align in the UI, not by running the function.
`[(-2.7950313411570025, 3.2845420690896105, 0.2828322955791683)]`


Hm…I’m confused.

I just want to vertex-snap along an object's local Y axis to some position in 3D space. I can't get your latest code to do that.

But the code I posted works.

You know how the vertex snap works with the move tool? When the Y handle is active you can snap to a point in 3d space in object mode? That’s all I’m trying to do, but in code.

What I posted seems to work I think? I’m confused now haha.

Like this:
Before

After

I think your code does something more complicated?
The code I posted above is fine, isn't it? (For my use case?)

It's only a little more complicated because it's more general. My code will work if the object you're snapping has a parent, or has any orientation. In your example, it looks like nothing has rotation values.

Also please note that I just updated the last 2 lines of my example in the previous post to actually perform the snap instead of just printing the new position.

Mine starts here
(This is what the scene would look like if you ran the latest code except for that last `setAttr` line)

But then running that last line, does this

See how my locator’s XZ plane is aligned with that locator in the background? It’s the same position you’d get if you held down the V key and dragged to that locator.


Yeah, your new code also works perfectly! Sorry for the trouble, @tfox_TD.

I was using xform, but if I use setAttr like you did in your script, your code works too.

Just to clarify though…
Aren’t both functions doing the exact same thing?

My function is exactly the same up to this point:
pt_os = pt_ws * osel_world_inverse

My code:
ptpos_in_objspace = (pt_ws_matrix * obj_world_inverse)

I add just the Y axis later on

Whereas you do this:
pt_os.x = 0.0
pt_os.z = 0.0
# Get the aligned point in the parent space of the osel
new_osel_vals = pt_os * osel_local_mat

The only difference is the above line, isn’t it?

As long as this below is being done in both functions:
pt_os = pt_ws * osel_world_inverse

Aren’t both functions pretty much doing the exact same thing?

It’s my lack of understanding that’s driving me a little nuts haha

But if they are doing the same thing, why can't I use xform with your code (replacing, not adding)?

I don't think so. The difference in my head is that yours is getting the offset in local space, whereas mine is getting the offset in parent space. The attribute values in the channel box move the object in parent space, so that's the space you need to use to do this in a “pure” form. (By pure, I mean that since I'm setting the values directly, this is something I could do in a plugin with the API where I don't have access to xform.)
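To see why the space distinction matters once rotations are involved, here's a tiny standalone sketch (plain Python, no Maya; the numbers are invented): an object whose local axes are rotated 90 degrees about Z turns a local-space +Y delta into a parent-space -X delta, so feeding the raw local values into the parent-space channel box attributes would move it in the wrong direction.

```python
import math

def rotate_z(deg):
    """A 3x3 rotation matrix about Z, row-vector convention."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, s, 0], [-s, c, 0], [0, 0, 1]]

def vec_mat(v, m):
    """Row vector times a 3x3 matrix."""
    return [sum(v[k] * m[k][j] for k in range(3)) for j in range(3)]

# The object's local axes are rotated 90 degrees about Z relative to its parent.
local_to_parent = rotate_z(90)

# A +1 move along the object's local Y axis...
local_delta = [0.0, 1.0, 0.0]

# ...comes out as a -1 move along the parent's X axis.
parent_delta = vec_mat(local_delta, local_to_parent)
print([round(x, 6) for x in parent_delta])  # [-1.0, 0.0, 0.0]
```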

Could you post how you’re using xform? I want to test your code with my locator setup to see if they match.

Yeah, I'm using the API just to query information, and I rely on xform for all my modeling tools because it seems to automatically take care of undo in a single step.

Try the code below and uncomment the last couple of lines at the bottom (your section vs. mine); they seem to give the same results, but I don't get why. I've never used setAttr; I've been using xform for a while now.

Not sure why they can't be interchangeable for your function?
EDIT: As you said, setAttr refers to the parent space of the object, hence why it won't work with xform and my function.

```python
from maya import cmds as mc
from maya.api import OpenMaya as om


def get_pos_in_obj_space(obj, pt):
    """
    Input obj and pt ws position as tuple
    Return the input point position in the object's local space coordinate system
    """

    def getWorldMatrix(obj):
        return om.MMatrix(mc.xform(obj, q=True, matrix=True, ws=True))

    # def getObjMatrix(obj):
    #     return om.MMatrix(mc.xform(obj, q=True, matrix=True, os=True))

    obj_world_inverse = getWorldMatrix(obj).inverse()

    pt_matrix_row = (pt[0], pt[1], pt[2], 1)
    pt_ws_matrix = om.MMatrix(
        (
            (1, 0, 0, 0),
            (0, 1, 0, 0),
            (0, 0, 1, 0),
            pt_matrix_row,
        )
    )

    ptpos_in_objspace = pt_ws_matrix * obj_world_inverse

    pt_os_x = ptpos_in_objspace.getElement(3, 0)
    pt_os_y = ptpos_in_objspace.getElement(3, 1)
    pt_os_z = ptpos_in_objspace.getElement(3, 2)
    print((pt_os_x, pt_os_y, pt_os_z))
    return (pt_os_x, pt_os_y, pt_os_z)


def tfox_get_pos_in_obj_space(obj, pt):
    """
    Input obj and pt ws position as tuple
    Return the input point position in the object's parent space coordinate system
    """

    def getMatrix(obj, world=True):
        return om.MMatrix(
            mc.xform(
                obj,
                q=True,
                matrix=True,
                worldSpace=world,
                objectSpace=not world,
            )
        )

    osel_world_inverse = getMatrix(obj).inverse()
    osel_local_mat = getMatrix(obj, world=False)

    # Don't have to use matrices if we don't need rotations
    # Get the point in the object space
    pt_ws = om.MPoint(pt)
    pt_os = pt_ws * osel_world_inverse
    print(pt_os)

    # aligning the Y axis means zeroing out the x and z
    pt_os.x = 0.0
    pt_os.z = 0.0

    # Get the aligned point in the parent space of the osel
    new_osel_vals = pt_os * osel_local_mat

    # These are the new values that can replace (not add to) the old transform values
    return (new_osel_vals.x, new_osel_vals.y, new_osel_vals.z)


loc1 = mc.spaceLocator()[0]
loc2 = mc.spaceLocator()[0]
loc3 = mc.spaceLocator()[0]
loc4 = mc.spaceLocator()[0]

mc.parent(loc4, loc3)
mc.parent(loc2, loc1)

mc.setAttr(loc1 + ".t", -2, 1, -0.75, type="double3")
mc.setAttr(loc1 + ".r", 18, -14, 5, type="double3")

mc.setAttr(loc2 + ".t", -3, 3, 0, type="double3")
mc.setAttr(loc2 + ".r", 39, -4, -39, type="double3")

mc.setAttr(loc3 + ".t", 1, 0.5, -0.2, type="double3")
mc.setAttr(loc3 + ".r", -22, -20, 0, type="double3")

mc.setAttr(loc4 + ".t", -3.2, 3.4, 1.5, type="double3")
mc.setAttr(loc4 + ".r", 12, -12, 3, type="double3")

pt = mc.xform(loc4, query=True, translation=True, worldSpace=True)

tfoxnewPt = tfox_get_pos_in_obj_space(loc2, pt)
dnewPt = get_pos_in_obj_space(loc2, pt)

# uncomment this section to test your code vs mine
# # Set the loc2 translation to the new point that is snapped along the y axis
# mc.setAttr(loc2 + ".t", *tfoxnewPt, type="double3")

# -----------------------------------------------------------------------------
# My function treats the new location as a delta vector from the current
# local space position of the object to be moved.
# In the case of a Y snap I just add the Y component of dnewPt to the current
# local space position of the object to be moved, aka locator2

loc2pos = mc.xform(loc2, query=True, translation=True, objectSpace=True)
movevector = (loc2pos[0], loc2pos[1] + dnewPt[1], loc2pos[2])
mc.xform(loc2, os=True, t=movevector)
```

p.s.
How do you post code with syntax highlighting on here?

p.p.s
Come to think of it, my function is more of a delta vector generator. It's not the input point's position in the object's local space; it is ONLY a delta vector. To get the true location of the input point in the object's local space, I would add ALL XYZ values of the point to the object's current local space position (which I was doing in the case of an XYZ snap).


Ohhh, I get it now. xform uses your delta directly in object space, and the end of my function (where I translate into parent space) is recreating what xform with objectSpace=True does under the hood.

And no idea how to get the syntax highlighting. I don’t do anything more than type triple-backticks around the code I post.


Hi @dive, @tfox_TD,

I wanted to have a go at this - pseudo code (not tested):

So moving along an object's axis happens in object space (based on the tool settings). This is not reflected in the channel box (as @tfox_TD explained), which shows the coordinate space the object lives in, i.e. its parent space, or world if it has no parent. I always work in one coordinate system before converting into the object's space (this keeps my brain from exploding). What I'm doing is first computing the target's world transform relative to the source. This is your object-space offset.

I then create a new clean transform with just the position part of the axes you want to align. Directly pumping this value into the channel box on the source will not be battle-hardened and will only work if it has no parents. So I first multiply this transform by the world transform of the source to essentially compute an offset in world space.

Finally, this new computed world transform needs to be applied relative to the source object's parent space (regardless of whether it has a parent; it's going to be more robust). Lastly, I set the source object's translation to the new transform's position part, the 4th row vector.

What this means: the channel box may very well get values on all three translation axes, but we're snapping along the supplied tool mode's axes.

```python
import maya.cmds as cmds
import maya.api.OpenMaya as apiOM


def snap_pos_axes(src, trg, axes="y"):
    """Snaps the source's position to a target
    along its object-space axes.

    :param str src: The source object.
    :param str trg: The target object.
    :param str axes: Axes you want to align
        in object-space e.g. 'x', 'xy', 'xyz'.
    """

    # Get the objects' transforms in world space.
    src_xfo = cmds.xform(src, q=True, ws=True, m=True)
    src_mat = apiOM.MMatrix(src_xfo)

    trg_xfo = cmds.xform(trg, q=True, ws=True, m=True)
    trg_mat = apiOM.MMatrix(trg_xfo)

    # Compute the target's local transform relative to the source.
    local_tm = trg_mat * src_mat.inverse()

    # Build a local offset matrix with just the translation
    # components of the requested axes.
    new_mat = apiOM.MMatrix()
    for axis in axes:
        index = "xyz".index(axis)
        offset = local_tm.getElement(3, index)
        new_mat.setElement(3, index, offset)

    # Compute the new transform relative to the source's parent space.
    parent_inv_mat = apiOM.MMatrix(
        cmds.getAttr("{}.parentInverseMatrix".format(src))
    )

    parent_inv_tm = (new_mat * src_mat) * parent_inv_mat

    # Apply the offset in parent space.
    for i, axis in enumerate("xyz"):
        cmds.setAttr("{}.t{}".format(src, axis),
                     parent_inv_tm.getElement(3, i))
```

EDIT: not a bug, I just have to account for this (for my modeling tool):

Steps to recreate:
With @tfox_TD 's function

I basically didn't account for the “translation” of an object's pivot via rotation only. That's what was throwing off the snap centers in the calculations.

I figured out what's happening!

Check this out: make a cube at 0,0,0.

Move pivot to corner and rotate
(basically changing the center of the cube without changing the channel box values):

THIS IS THE ERROR that has been killing me for days haha

A workaround may be to store the rotation values, zero the transforms, and do the calculations from there, because if I do that it never “breaks”.

If not I get results like this:

Rather than

Define “breaks”.

Also, yeah, I definitely didn't take pivot stuff into account. Personally, I really don't like pivot offsets. They're a needless complication, and extra calculations that aren't needed more than 99% of the time. If they could create a TransformV2 object without all that extra junk that we could choose to use in rigs, I would be a happy man.


This is how I overcame my issue. Obviously, when rigging, I assume people zero out / freeze / bake transforms, but when modeling, anything goes?

Both of your solutions are great. Thanks for the input @tfox_TD @chalk!

The “breaking” was regarding this (I'm sure there's a term for it?):

Imagine rolling a cube in +Z starting at 0,0,0 by moving its pivot and rotating it “forward”. This is what was throwing off the snap tool calculations.

It was hard to figure out what was going on haha; things would mysteriously not snap to the right locations while testing. It occurs when I've “moved” an object via lots of random rotations / pivot moves.

For my snap tool, I run through the objects and pre-bake pivots based on this calculation
(it seems to work):

```python
from maya import cmds as mc
from maya.api import OpenMaya as om


def getObjWsPosVector(obj):
    # Helper used below (not shown in the original post); assumed to
    # return the object's worldspace position as an MVector.
    return om.MVector(mc.xform(obj, q=1, ws=1, t=1))


def checkRotatePivotOffset(obj):
    """
    Checks if there is a relative difference between the
    object's current worldspace position + the pivot's object space position
    VS the pivot's worldspace position.
    Returns True if an offset exists.
    Returns False if there is NO offset.
    :param str obj: object name
    """
    rotpiv_name = obj + '.rotatePivot'
    rotpivws = om.MVector(mc.xform(rotpiv_name, q=1, ws=1, t=1))
    rotpivos = om.MVector(mc.xform(rotpiv_name, q=1, os=1, t=1))

    objposws = getObjWsPosVector(obj)
    rotpivws2 = objposws + rotpivos

    return rotpivws != rotpivws2
```

So, you might already have this working, but I'd like to propose an alternate method.

You can reliably do exactly what you need entirely in world space with some simple vector math, and hierarchy won't affect the result. The benefit of this is that the object-space axis you use is completely arbitrary. This is how I would do it:

```python
"""
For two selected transforms, snap the first to the position of the second along
the given local transform axis
"""
import pymel.core as pm

## The local-space axis of the first transform node.
## This is set to Y by default but could be set to any arbitrary axis, or combination of
##   axes such as X/Y or Z/Y.
localAxisA = pm.dt.Vector(0, 1, 0).normal()

## Get the selected transform nodes
transformNodeA, transformNodeB = pm.selected(type='transform')

## Record the world-space positions of the transforms (returned as pm.dt.Vector)
positionA = transformNodeA.getTranslation(worldSpace=True)
positionB = transformNodeB.getTranslation(worldSpace=True)

## Multiply the local axis by transform A's matrix to get the axis in world-space
worldAxisA = localAxisA * transformNodeA.getAttr('matrix')

## Get a world-space vector from transform A to transform B
vectAB = positionB - positionA

## Use a dot product to "project" the vector from transform A to B onto the world-space
##   axis of transform A
dot = vectAB.dot(worldAxisA)
translateDelta = worldAxisA * dot

## Add the result to the current position of transform A and set it
newPositionA = positionA + translateDelta
transformNodeA.setTranslation(newPositionA, worldSpace=True)
```

(Edit: added python syntax highlighting to the script)
