I am working through the joblib shared-memory tutorial. It seems that `numpy.memmap` dumps data to disk, which is unfortunate. However, using ramfs it should be theoretically possible to share memory between joblib processes on a Linux box.
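For reference, the disk-backed pattern from the tutorial looks roughly like this (a minimal sketch; the file path and the use of `np.sum` as the worker are just placeholders):

```python
import numpy as np
from joblib import Parallel, delayed, dump, load

data = np.random.random((1000, 1000))

# Tutorial pattern: dump the array to a file on disk, then reopen it
# as a memmap so that worker processes share the same pages read-only.
dump(data, "/tmp/data.joblib")
shared = load("/tmp/data.joblib", mmap_mode="r")

row_sums = Parallel(n_jobs=4)(
    delayed(np.sum)(shared[i]) for i in range(shared.shape[0])
)
```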
Is there a convenient pipeline for:

- Create a ramfs filesystem just big enough to hold a particular shape/dtype ndarray
- `memmap` that ndarray to that ramfs
- Allow me to do what I want with it
- Clean up the `memmap`
- Clean up the ramfs
I could invoke a bunch of `subprocess.call(["mount", ...])`-type calls, but this seems overly complex. Is there a better way to do this?
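For concreteness, this is roughly what I mean by the `subprocess.call` approach (an untested sketch; it assumes root privileges for `mount`/`umount`, and the mount point and file name are arbitrary):

```python
import os
import subprocess
import tempfile

import numpy as np

shape, dtype = (1000, 1000), np.float64

# 1. Mount a ramfs instance at a throwaway mount point (requires root).
mountpoint = tempfile.mkdtemp()
subprocess.check_call(["mount", "-t", "ramfs", "ramfs", mountpoint])

try:
    # 2. Back the ndarray with a file that lives on the ramfs.
    path = os.path.join(mountpoint, "shared.dat")
    arr = np.memmap(path, dtype=dtype, mode="w+", shape=shape)

    # 3. ... do what I want with it (e.g. hand `arr` to joblib workers) ...
    arr[:] = 0.0

    # 4. Clean up the memmap.
    del arr
    os.remove(path)
finally:
    # 5. Clean up the ramfs.
    subprocess.check_call(["umount", mountpoint])
    os.rmdir(mountpoint)
```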