I have a simple 3D array a1, and its masked analog a2:
import numpy
a1 = numpy.array([[[0.00, 0.00, 0.00],
                   [0.88, 0.80, 0.78],
                   [0.75, 0.78, 0.77]],
                  [[0.00, 0.00, 0.00],
                   [3.29, 3.29, 3.30],
                   [3.27, 3.27, 3.26]],
                  [[0.00, 0.00, 0.00],
                   [0.41, 0.42, 0.40],
                   [0.42, 0.43, 0.41]]])
a2 = numpy.ma.masked_equal(a1, 0.)
I want to take the mean of this array along several axes at once (a peculiar, undocumented use of the axis argument of numpy.mean; see e.g. here for an example):
numpy.mean(a1, axis=(0, 1))
This works fine with a1, but with the masked array a2 I get the following error:
TypeError: tuple indices must be integers, not tuple
I get the same error with the masked version numpy.ma.mean(a2, axis=(0, 1)), or if I first unmask the array through a2[a2.mask] = 0.
I am using a tuple for the axis argument of numpy.mean because it is deliberately not hardcoded: the command is applied to arrays with potentially different numbers of dimensions, and the tuple is adapted accordingly.
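In the meantime, here is a workaround sketch that sidesteps the tuple axis entirely (assuming, as in my case, that the axes to reduce are the leading, contiguous ones): collapse them into a single axis with reshape, which preserves the mask, and then reduce over that one integer axis.

```python
import numpy

a1 = numpy.array([[[0.00, 0.00, 0.00],
                   [0.88, 0.80, 0.78],
                   [0.75, 0.78, 0.77]],
                  [[0.00, 0.00, 0.00],
                   [3.29, 3.29, 3.30],
                   [3.27, 3.27, 3.26]],
                  [[0.00, 0.00, 0.00],
                   [0.41, 0.42, 0.40],
                   [0.42, 0.43, 0.41]]])
a2 = numpy.ma.masked_equal(a1, 0.)

# Collapse the leading axes (0, 1) into one; reshape on a masked array
# carries the mask along, and a single integer axis is accepted by the
# masked mean even on numpy 1.9.x.
axes = (0, 1)                      # the axes to average over (leading ones)
kept = a2.shape[len(axes):]        # trailing shape that survives the mean
flat = a2.reshape((-1,) + kept)    # shape (9, 3) here
result = flat.mean(axis=0)

print(result)
```

Note that reducing one axis at a time, e.g. a2.mean(axis=0).mean(axis=0), is not equivalent for masked arrays: each partial mean divides by a different count of unmasked values, so the entries end up weighted unevenly. Collapsing the axes first keeps a single count per output element.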
The problem occurs with numpy versions 1.9.1 and 1.9.2.