I am working on 3D image segmentation with deep learning. Basically, I need to (1) pad a NumPy array, (2) process the array, and (3) unpad the array.
import numpy as np

dataArray = np.pad(dataArray, 25, mode='constant', constant_values=0)  # pad every axis by 25 on both sides
processedArray = my_process(dataArray)                                 # process
processedArray = processedArray[25:-25, 25:-25, 25:-25, :]             # unpad the first three axes, keep the last
The problem is that processedArray is very large (shape (464, 928, 928, 928, 10)), and I run out of memory when executing the unpadding step. I imagine the unpadding allocates new memory; am I right? How could I proceed so that no new memory is allocated, i.e. so that the index points to the unpadded elements without copying them?
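To make the question concrete, here is a minimal sketch of the check I have in mind, on a toy array (shapes shrunk, my_process omitted; arr is just a stand-in for processedArray):

import numpy as np

arr = np.zeros((60, 60, 60, 10))           # toy stand-in for processedArray
sliced = arr[25:-25, 25:-25, 25:-25, :]    # same unpadding slice as above

print(np.shares_memory(arr, sliced))  # does the slice share memory with arr?
print(sliced.base is arr)             # is sliced a view whose base is arr?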
Information that might help: the lines above are executed inside a function, and processedArray is returned from it.
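In case it matters that the slice is returned from a function, here is a stripped-down sketch of the structure (pad_process_unpad is a hypothetical name, and my_process is replaced by an identity stand-in):

import numpy as np

def pad_process_unpad(dataArray):
    dataArray = np.pad(dataArray, 25, mode='constant', constant_values=0)
    processedArray = dataArray  # stand-in for my_process(dataArray)
    return processedArray[25:-25, 25:-25, 25:-25]

out = pad_process_unpad(np.zeros((10, 10, 10)))
print(out.base is not None)  # does the returned slice still reference the padded buffer?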