
I have a huge (30 GB) memory-mapped ndarray:

import numpy

arr = numpy.memmap(afile, dtype=numpy.float32, mode="w+", shape=(n, n))

After filling it with some values (which works fine; peak memory usage stays below 1 GB) I want to calculate the standard deviation:

print('stdev: {0:4.4f}\n'.format(numpy.std(arr)))

This line fails miserably with a `MemoryError`.

I am not sure why this fails. I would be grateful for tips on how to calculate the standard deviation in a memory-efficient way.

Environment: venv + Python 3.6.2 + NumPy 1.13.1

– sophros

2 Answers


Indeed, NumPy's implementations of `std` and `mean` make full copies of the array and are very memory-inefficient. Here is a more memory-efficient, blockwise implementation:

import numpy as np

# Memory overhead is on the order of BLOCKSIZE * itemsize. Should be
# at least ~1 MB for efficient HDD access.
BLOCKSIZE = 1024**2

# Work on a flat 1-D view of the memmap (no copy for a contiguous array).
flat = arr.reshape(-1)
n = flat.size

# For numerical stability. The closer this is to mean(arr), the better.
PIVOT = flat[0]

sum_ = 0.
sum_sq = 0.
for block_start in range(0, n, BLOCKSIZE):
    # Subtracting PIVOT makes an in-memory copy of the block, so the
    # memmapped file itself is not modified.
    block_data = flat[block_start:block_start + BLOCKSIZE] - PIVOT
    sum_ += np.sum(block_data)
    sum_sq += np.sum(block_data**2)

stdev = np.sqrt(sum_sq / n - (sum_ / n)**2)
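
A quick way to sanity-check the blockwise loop (the `blockwise_std` wrapper and the test data here are mine, not part of the answer) is to wrap it in a function and compare it against `numpy.std` on an array small enough to fit in memory:

import numpy as np

def blockwise_std(a, blocksize=1024**2):
    # Same blockwise computation as above, wrapped for testing.
    flat = a.reshape(-1)
    n = flat.size
    pivot = flat[0]
    sum_ = 0.
    sum_sq = 0.
    for start in range(0, n, blocksize):
        block = flat[start:start + blocksize] - pivot
        sum_ += np.sum(block)
        sum_sq += np.sum(block**2)
    return np.sqrt(sum_sq / n - (sum_ / n)**2)

small = np.random.rand(1000, 1000).astype(np.float32)
# Use a small block size so the loop actually runs over several blocks.
print(blockwise_std(small, blocksize=10**5), np.std(small, dtype=np.float64))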
– kiyo
  • Thank you for the implementation. However, using NumPy `sum` is numerically unstable, and it is precisely for large matrices that it fails badly ([see this SO question](https://stackoverflow.com/questions/33004029/is-numpy-sum-implemented-in-such-a-way-that-numerical-errors-are-avoided)). A short illustration and the corrected (albeit slower) version follow. – sophros Sep 19 '17 at 16:40
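
A minimal illustration of the rounding error that `math.fsum` avoids (plain Python floats, unrelated to the array above):

import math

values = [0.1] * 10
print(sum(values))        # 0.9999999999999999 -- the error accumulates
print(math.fsum(values))  # 1.0 -- fsum tracks partial sums exactly

The second answer below applies the same blockwise loop, but accumulates each block with `math.fsum` instead of `np.sum`.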
import math
import numpy as np

BLOCKSIZE = 1024**2

# Work on a flat 1-D view of the memmap (no copy for a contiguous array).
flat = arr.reshape(-1)
n = flat.size

# For numerical stability. The closer this is to mean(arr), the better.
PIVOT = flat[0]

sum_ = 0.
sum_sq = 0.
for block_start in range(0, n, BLOCKSIZE):
    # Subtracting PIVOT makes an in-memory copy; the memmapped file is
    # left untouched.
    block_data = flat[block_start:block_start + BLOCKSIZE] - PIVOT
    # math.fsum tracks partial sums exactly, so the per-block sums do
    # not accumulate rounding error.
    sum_ += math.fsum(block_data)
    sum_sq += math.fsum(block_data**2)

stdev = np.sqrt(sum_sq / n - (sum_ / n)**2)
– sophros
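
Not from either answer, but as a sketch of an alternative under the same memory constraints: a two-pass blockwise computation first finds the mean and then the sum of squared deviations, so no `PIVOT` guess is needed, at the cost of reading the file twice (the function name and block size below are mine):

import numpy as np

def memmap_std_two_pass(a, blocksize=1024**2):
    # Two-pass blockwise standard deviation (population std, like numpy.std).
    flat = a.reshape(-1)   # 1-D view; no copy for a contiguous memmap
    n = flat.size

    # Pass 1: the mean, accumulated in float64.
    total = 0.0
    for start in range(0, n, blocksize):
        total += np.sum(flat[start:start + blocksize], dtype=np.float64)
    mean = total / n

    # Pass 2: the sum of squared deviations from the mean.
    ssd = 0.0
    for start in range(0, n, blocksize):
        block = flat[start:start + blocksize] - mean  # in-memory copy
        ssd += np.sum(block * block, dtype=np.float64)
    return np.sqrt(ssd / n)

print('stdev: {0:4.4f}'.format(memmap_std_two_pass(arr)))

Each pass holds only one block in memory, and the per-block sums are accumulated in float64.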