It is possible to compute the mean of a NumPy array over multiple axes, e.g. `my_ndarray.mean(axis=(1, 2))`.
However, this does not seem to work with a masked array:
>>> import numpy as np
>>> a = np.random.randint(0, 10, (2, 2, 2))
>>> a
array([[[0, 9],
        [2, 5]],

       [[8, 6],
        [0, 7]]])
>>> a.mean(axis=(1, 2))
array([ 4. , 5.25])
>>> ma = np.ma.array(a, mask=(a < 5))
>>> ma.mean(axis=(1, 2))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/site-packages/numpy/ma/core.py", line 5066, in mean
cnt = self.count(axis=axis)
File "/usr/lib/python2.7/site-packages/numpy/ma/core.py", line 4280, in count
n1 = np.size(m, axis)
File "/usr/lib/python2.7/site-packages/numpy/core/fromnumeric.py", line 2700, in size
return a.shape[axis]
TypeError: tuple indices must be integers, not tuple
How can I compute the mean of a masked array over multiple axes, preferably as simply as for a normal array?
(I would rather use a solution that does not involve defining a new function, as proposed in this answer.)
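One workaround I have considered (not sure it is the most idiomatic) is to collapse the axes to be averaged into a single axis with `reshape`, which also reshapes the mask, and then take a single-axis mean. A minimal sketch, using a deterministic array instead of the random one above:

```python
import numpy as np

# Deterministic stand-in for the random array from the question
a = np.arange(8).reshape(2, 2, 2)
ma = np.ma.array(a, mask=(a < 5))

# Collapse axes 1 and 2 into one axis; the mask is reshaped along with
# the data, so the single-axis mean only counts unmasked elements.
result = ma.reshape(ma.shape[0], -1).mean(axis=1)
```

Here the first slice of `a` is entirely masked, so `result[0]` comes out masked, while `result[1]` is the mean of the unmasked values 5, 6 and 7. Note that chaining `.mean(axis=2).mean(axis=1)` would not be equivalent in general, because a mean of per-row means weights rows with different unmasked counts incorrectly.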