Multiply vector by scalar: order of operands

Hello,
I have a Vector class to handle vector calculations.

class Vector:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

    def __mul__(self, scalar):
        # scale each component and return a new Vector, not a plain number
        return Vector(self.x * scalar, self.y * scalar, self.z * scalar)

v = Vector(1, 1, 1)

So print(6 * v) is not working while print(v * 6) is. Is there a way to write the class to handle both cases, or do I need to put the operands in the proper order to do the math?

I am re-writing the vector class from the C++ implementation here, and I can see that all multiplication cases (vector*vector, vector*scalar and scalar*vector) are covered there. I am confused about how to do the same in Python.

// vec3 Utility Functions

inline std::ostream& operator<<(std::ostream &out, const vec3 &v) {
    return out << v.e[0] << ' ' << v.e[1] << ' ' << v.e[2];
}

inline vec3 operator+(const vec3 &u, const vec3 &v) {
    return vec3(u.e[0] + v.e[0], u.e[1] + v.e[1], u.e[2] + v.e[2]);
}

inline vec3 operator-(const vec3 &u, const vec3 &v) {
    return vec3(u.e[0] - v.e[0], u.e[1] - v.e[1], u.e[2] - v.e[2]);
}

inline vec3 operator*(const vec3 &u, const vec3 &v) {
    return vec3(u.e[0] * v.e[0], u.e[1] * v.e[1], u.e[2] * v.e[2]);
}

inline vec3 operator*(double t, const vec3 &v) {
    return vec3(t*v.e[0], t*v.e[1], t*v.e[2]);
}

inline vec3 operator*(const vec3 &v, double t) {
    return t * v;
}

inline vec3 operator/(vec3 v, double t) {
    return (1/t) * v;
}

How to manage right-operand operators?
You have __mul__ for the left operand and __rmul__ for the right.
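For instance, here's a minimal sketch of the class from the question with both hooks added (the Vector name and constructor are taken from the original post):

```python
class Vector:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def __mul__(self, scalar):
        # handles Vector * scalar
        return Vector(self.x * scalar, self.y * scalar, self.z * scalar)

    def __rmul__(self, scalar):
        # handles scalar * Vector: Python calls this when the left operand
        # (e.g. the int 6) doesn't know how to multiply by a Vector
        return self.__mul__(scalar)

v = Vector(1, 1, 1)
print((v * 6).x, (6 * v).x)  # both forms work now
```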

For the 2nd part, some languages have method overloading (it's doable in Python but I think generally discouraged?), or you can simply use *args and check the input type(s) yourself.
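A sketch of that type-checking approach (the Vector name mirrors the class from the question; treating vector * vector as elementwise multiplication is an assumption borrowed from the C++ snippet above):

```python
class Vector:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def __mul__(self, other):
        if isinstance(other, Vector):
            # elementwise vector * vector, as in the C++ operator*(vec3, vec3)
            return Vector(self.x * other.x, self.y * other.y, self.z * other.z)
        if isinstance(other, (int, float)):
            return Vector(self.x * other, self.y * other, self.z * other)
        # let Python fall back to the other operand's hooks / raise TypeError
        return NotImplemented

    __rmul__ = __mul__  # scalar * vector reuses the same dispatch
```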

2 Likes

@ldunham1 is correct, you need to implement __rmul__ and __rtruediv__ (the Python 3 name; it was __rdiv__ in Python 2) for cases like 1 * (1,2,3) or 2 / (1,2,3)

You’ll also need to add __iadd__, __isub__, __imul__ and __itruediv__ (Python 3; __idiv__ in Python 2) for things like

 v = Vector(1, 2, 3)
 v += Vector(2, 3, 4)
 # v should now be (3, 5, 7)
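A sketch of the in-place hook, assuming a mutable Vector like the one in the question (note that without __iadd__, v += u still works by falling back to __add__ and rebinding the name; __iadd__ just lets you mutate the same object instead):

```python
class Vector:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def __add__(self, other):
        return Vector(self.x + other.x, self.y + other.y, self.z + other.z)

    def __iadd__(self, other):
        # mutate in place and return self, so `v += u` keeps the same object
        self.x += other.x
        self.y += other.y
        self.z += other.z
        return self

v = Vector(1, 2, 3)
v += Vector(2, 3, 4)  # v is now (3, 5, 7)
```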

If you find yourself tempted to overload more methods to enhance the syntax (Like overloading __pow__ to be the cross product) think twice: it’s fun, but you ultimately run the risk of writing code only you can read :slight_smile:

2 Likes

Incidentally, here’s an old example of a python vector library. I notice I didn’t do the right operands!

Biggest design question for these guys is whether or not to make them immutable (my version does that). It’s got serious implications either way…

1 Like

I’ve been (relatively) recently working on my own vector/transform library based on numpy.

The trick with it (like everything numpy) is supporting arrays of numbers. For instance there’s a Vector3Array along with Vector3… But I’m cheating :slight_smile: I get all the left/right operand hooks by inheriting from np.ndarray
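A rough sketch of that inheritance trick (the Vector3 name is assumed here; a production version would also implement __array_finalize__ and friends, per numpy's subclassing guide):

```python
import numpy as np

class Vector3(np.ndarray):
    # minimal ndarray subclass: a 3-element float view with a vector type
    def __new__(cls, x, y, z):
        return np.asarray([x, y, z], dtype=float).view(cls)

v = Vector3(1.0, 2.0, 3.0)
# left- and right-operand arithmetic comes for free from ndarray:
a = 2 * v   # scalar * vector
b = v * 2   # vector * scalar
```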

1 Like

Anyone ever just use either Eigen or Euclid?

Is there any rationale behind rolling your own libraries beyond supporting environments running a Python interpreter that isn’t binary compatible with the vanilla one? The latest versions of Max/Maya/Houdini all appear to ship a vanilla interpreter these days, so this incompatibility is (hopefully) a thing of the past.

I use Eigen when I’m building things in C++, but I didn’t know there were python bindings. I’ve never felt the need to check for it in python because I’ve got numpy which is easier to use.
Eigen and numpy can use similar backends (i.e. BLAS/LAPACK/MKL), so the expensive operations will have comparable speed. Unless there’s a very specific algorithm you want from Eigen, numpy is the way to go, I think.

Euclid seems nice. Looks to be pure python. It’s like the end-game of what @kiryha is trying to do, and would probably be an awesome module to look at to get ideas.
Since it’s pure python, it’d probably be slow when working with lots of vertices. But if I don’t have access to numpy, that may be a useful library.

In a mostly-python environment, you could also use a pure python one (Euclid or your own) and compile it with Cython for what I’d expect to be serious perf gains, especially if the design were immutable.