
Is there anything I can do to speed up masked arrays in numpy? I had a terribly inefficient function that I rewrote to use masked arrays (where I could just mask rows instead of making copies and deleting rows as I had been doing). However, I was shocked to find that the masked version was 10x slower, because masked arrays themselves are so much slower.

As an example, take the following (the masked version is more than 6 times slower for me):

import timeit
import numpy as np
import numpy.ma as ma

def test(row):
    return row[0] + row[1]

a = np.arange(1000).reshape(500, 2)
t = timeit.Timer('np.apply_along_axis(test, 1, a)','from __main__ import test, a, np')
print(round(t.timeit(100), 6))

b = ma.array(a)
t = timeit.Timer('ma.apply_along_axis(test, 1, b)','from __main__ import test, b, ma')
print(round(t.timeit(100), 6))
Scott B
  • Keep in mind that MaskedArrays are more of a convenience than a real solution. If you need to perform intensive computations on arrays with missing/undefined values, you're in most cases better off dealing with the mask and the data yourself. Until a better implementation of missing/undefined values is baked into the NumPy code (which should happen some time soon), you are stuck with MaskedArrays. Yes, they are quite slow, because they're coded in pure Python, which of course cannot be as efficient as relying on some C code. – Pierre GM Aug 17 '12 at 11:46
  • Thanks for the question, this confirms what I suspected in my code. – Markus Dutschke Aug 29 '17 at 10:11

2 Answers


I have no idea why the masked array functions are so slow, but since it sounds like you are using the mask to select rows (as opposed to individual values), you can create a regular array from the unmasked rows and use the plain np function instead:

b.mask = np.zeros(500, dtype=bool)  # start with no rows masked
b.mask[498] = True                  # mask out row 498
# Pull the unmasked rows into a plain ndarray, then apply the regular NumPy function.
t = timeit.Timer('c = b.view(np.ndarray)[~b.mask[:,0]]; np.apply_along_axis(test, 1, c)','from __main__ import test, b, ma, np')
print(round(t.timeit(100), 6))

Better yet, don't use masked arrays at all; just maintain your data and a 1D mask array as separate variables:

a = np.arange(1000).reshape(500, 2)
mask = np.ones(a.shape[0], dtype=bool)
mask[498] = False
out = np.apply_along_axis(test, 1, a[mask])
Luke
  • I did end up doing something similar to your second example, but I needed the variable 'out' to have the same number of indices as there are rows in 'a' (a sketch of one way to do that follows these comments). See [This Question](http://stackoverflow.com/questions/5761642/python-numpy-get-index-into-main-array-from-subset) – Scott B Jul 07 '11 at 20:08
  • there is a problem in the second example: a.shape == (500, 2); a[mask].shape == (499, 2) – Markus Dutschke Mar 11 '21 at 10:00
  • It's the intended behavior for my suggested solution -- If the intent of the masked array is to ignore certain rows of the data, then use the mask array to just remove them entirely (so yes, this necessarily changes the shape of the array). – Luke Mar 15 '21 at 20:11
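A minimal sketch of how those two points can be reconciled, assuming a NaN fill value is acceptable for the skipped rows (my illustration, not part of the answer above): compute only on a[mask], then scatter the results back into a full-length array so the output lines up with the rows of a.

import numpy as np

def test(row):
    return row[0] + row[1]

a = np.arange(1000).reshape(500, 2)
mask = np.ones(a.shape[0], dtype=bool)
mask[498] = False

# Compute only on the kept rows, then scatter the results back so that
# out_full has one entry per row of a; the masked-out row stays NaN.
out_full = np.full(a.shape[0], np.nan)
out_full[mask] = np.apply_along_axis(test, 1, a[mask])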

common workaround

The most efficient way I am aware of is to handle the mask manually. Here is a short benchmark for calculating a masked mean along an axis. As of 2021 (numpy version 1.19.2), the manual implementation is 3x faster.

It is worth noting that:

  • np.nanmean is as slow as ma.mean. However, I did not find an easy workaround for that, as 0 * nan -> nan and np.where is time consuming.
  • opencv usually has a mask argument for its routines, but switching libraries may not be suitable in most cases.

benchmark

benchmark manual (np.sum(..values..)/np.sum(..counts..))
    time for 100x np_mean: 0.15721

benchmark ma.mean
    time for 100x ma_mean: 0.580072

benchmark np.nanmean
    time for 100x nan_mean: 0.609166


np_mean[:5]: [0.74468436 0.75447124 0.75628326 0.74990387 0.74708414]
ma_mean[:5]: [0.7446843592460088 0.7544712410870448 0.7562832614361736
 0.7499038657880674 0.747084143818861]
nan_mean[:5]: [0.74468436 0.75447124 0.75628326 0.74990387 0.74708414]
np_mean == ma_mean:  True
np_mean == nan_mean:  True
np.__version__: 1.19.2

code

import timeit
import numpy as np
import numpy.ma as ma

np.random.seed(0)

arr = np.random.rand(1000, 1000)
msk = arr > .5  # positive mask: only elements > .5 are processed

print('\nbenchmark manual (np.sum(..values..)/np.sum(..counts..))')
np_mean = np.sum(arr * msk, axis=0)/np.sum(msk, axis=0)
t = timeit.Timer('np_mean = np.sum(arr * msk, axis=0)/np.sum(msk, axis=0)', globals=globals())
print('\ttime for 100x np_mean:', round(t.timeit(100), 6))

print('\nbenchmark ma.mean')
ma_arr = ma.masked_array(arr, mask=~msk)
ma_mean = ma.mean(ma_arr, axis=0)
t = timeit.Timer('ma_mean = ma.mean(ma_arr, axis=0)', globals=globals())
print('\ttime for 100x ma_mean:', round(t.timeit(100), 6))

print('\nbenchmark np.nanmean')
nan_arr = arr.copy()
nan_arr[~msk] = np.nan
nan_mean = np.nanmean(nan_arr, axis=0)
t = timeit.Timer('nan_mean = np.nanmean(nan_arr, axis=0)', globals=globals())
print('\ttime for 100x nan_mean:', round(t.timeit(100), 6))

print('\n')
print('np_mean[:5]:', np_mean[:5])
print('ma_mean[:5]:', ma_mean[:5])
print('nan_mean[:5]:', nan_mean[:5])
print('np_mean == ma_mean: ', (np_mean == ma_mean).all())
print('np_mean == nan_mean: ', (np_mean == nan_mean).all())

print('np.__version__:', np.__version__)

The manual version only works if there are no NaNs in the array. If arr contains NaNs, just construct the mask with msk = ~np.isnan(arr) and afterwards replace the NaNs in arr with arr = np.nan_to_num(arr, copy=False, nan=0).
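A minimal sketch of that NaN handling (my illustration, reusing the variable names from the benchmark above and assuming arr contains some NaNs):

import numpy as np

np.random.seed(0)
arr = np.random.rand(1000, 1000)
arr[arr < .2] = np.nan                        # introduce some missing values

msk = ~np.isnan(arr)                          # positive mask: True where a value should be processed
arr = np.nan_to_num(arr, copy=False, nan=0)   # replace NaNs so arr * msk stays finite

np_mean = np.sum(arr * msk, axis=0) / np.sum(msk, axis=0)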

Markus Dutschke
  • I don't get the same results when using the manual (np.sum(..values..)/np.sum(..counts)) and `np.nanmean` versions for `arr = np.random.rand(10,4); arr[arr<0.2]=np.nan;` and a mask equal to `msk = ~np.isnan(arr);`. The manual version doesn't evaluate the rows where there are nan, while the nanmean does. – m_power Jun 12 '21 at 13:34
  • @m_power The answer is updated. The problem with NaNs is that 0 * nan = nan. – Markus Dutschke Jun 12 '21 at 16:03
  • Thanks! It works when using `np.nan_to_num` after `msk = ~np.isnan(arr)`. `msk = np.isnan(arr)` doesn't work if you want to avoid the nans. – m_power Jun 12 '21 at 17:21