In the Python documentation, it is said that `__mul__` is called first to implement the binary arithmetic operation `*`. `__rmul__` is only called if the left operand does not support the operation, or if the right operand's type is a subclass of the left operand's type. However, consider the following code:
```python
import numpy as np

a = [1, 2, 3]
b = np.array(2)
print("a * b:", a * b)
print("a.__mul__(b):", a.__mul__(b))
print("b.__rmul__(a):", b.__rmul__(a))
```
My first thought was that the result of `a * b` should be `[1, 2, 3, 1, 2, 3]`, identical to `a * 2`. However, the actual output is:
```
a * b: [2 4 6]
a.__mul__(b): [1, 2, 3, 1, 2, 3]
b.__rmul__(a): [2 4 6]
```
It seems that `a * b` calls `b.__rmul__(a)`. However, in this case `a.__mul__(b)` is implemented and gives the expected result, and `np.ndarray` is clearly not a subclass of `list`.
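(Incidentally, the reason `a.__mul__(b)` succeeds at all appears to be that a 0-d integer array supports `__index__`, so `list.__mul__` can read it as a plain repetition count — a quick check:)

```python
import operator
import numpy as np

b = np.array(2)
# A 0-d integer ndarray implements __index__, so it can stand in
# wherever Python expects an integer, e.g. as a sequence repeat count.
print(operator.index(b))      # 2
print([1, 2, 3].__mul__(b))   # [1, 2, 3, 1, 2, 3]
```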
So my question is: what is really going on in this example, and how does Python choose between the binary arithmetic methods and their reflected counterparts?
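To illustrate the documented rule, here is a minimal sketch with two purely illustrative classes (`Left` and `Right` are hypothetical names): the reflected method only fires once the left operand declines by returning `NotImplemented`:

```python
class Left:
    def __mul__(self, other):
        print("Left.__mul__ called")
        return NotImplemented  # decline, so Python falls back to the reflected method

class Right:
    def __rmul__(self, other):
        print("Right.__rmul__ called")
        return "handled by Right"

# Left.__mul__ is tried first, then Right.__rmul__ handles the operation.
print(Left() * Right())
```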
UPDATE: Thanks to hpaulj, something more interesting turned up with the following code:
```python
import numpy as np

class Foo1(list):
    pass

class Foo2(list):
    def __mul__(self, other):
        print('in Foo2.__mul__')
        return super().__mul__(other)

print('Foo1:')
print(Foo1([1, 2, 3]) * np.array(3))
print('Foo2:')
print(Foo2([1, 2, 3]) * np.array(3))
```
The output is:
```
Foo1:
[3 6 9]
Foo2:
in Foo2.__mul__
[1, 2, 3, 1, 2, 3, 1, 2, 3]
```
Both `Foo1` and `Foo2` inherit from `list`; the only difference is that `Foo2` overrides `__mul__`, explicitly delegating to `super().__mul__`. Yet `Foo1` and `Foo2` give totally different results! So, as hpaulj said, there must be some sort of special relation between `list` and `np.ndarray`. Still, that seems weird: `np.ndarray` comes from a third-party library, and there should be no way for it to modify the behavior of builtin types.
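To rule out the obvious explanations, I also verified that there is no subclass relation in either direction, and that the only structural difference between the two classes is whether `__mul__` appears in the class's own `__dict__`:

```python
import numpy as np

class Foo1(list):
    pass

class Foo2(list):
    def __mul__(self, other):
        return super().__mul__(other)

# No inheritance relation between ndarray and list in either direction.
print(issubclass(np.ndarray, list))  # False
print(issubclass(list, np.ndarray))  # False

# Foo1 merely inherits list.__mul__; Foo2 defines its own entry.
print('__mul__' in vars(Foo1))  # False
print('__mul__' in vars(Foo2))  # True
```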