I essentially have a confusion matrix of size n x n, with all my diagonal elements being 1. For every row, I wish to calculate its mean, excluding the 1, i.e. excluding the diagonal value. Is there a simple way to do it in numpy?
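For example, with a hypothetical 3 x 3 input, the result I'm after is the mean of the two off-diagonal entries in each row:

cs = np.array([[1. , 0.2, 0.4],
               [0.2, 1. , 0.6],
               [0.4, 0.6, 1. ]])

# desired output: [0.3, 0.4, 0.5]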
This is my current solution:
import numpy as np

# Mask the diagonal, then average over the remaining entries in each row.
mask = np.zeros(cs.shape, dtype=bool)
np.fill_diagonal(mask, True)
print(np.ma.masked_array(cs, mask).mean(axis=1))
where cs is my n x n matrix.
The code seems convoluted, and I certainly feel that there's a much more elegant solution.
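For comparison, here are two shorter approaches I've considered; both are sketches that assume cs is square and the diagonal is exactly 1:

import numpy as np

n = cs.shape[0]

# Since every diagonal entry is 1, subtract it from the row sum
# and divide by the n - 1 remaining entries.
means_arithmetic = (cs.sum(axis=1) - 1) / (n - 1)

# Or drop the diagonal with a boolean mask and reshape to (n, n - 1);
# boolean indexing flattens in row-major order, so each row keeps its
# n - 1 off-diagonal entries together.
means_masked = cs[~np.eye(n, dtype=bool)].reshape(n, n - 1).mean(axis=1)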