
I am working through the joblib shared memory tutorial. It seems that numpy.memmap dumps data to disk, which is unfortunate. However, using ramfs, it should in theory be possible to share memory between joblib processes on a Linux box.
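For concreteness, the disk-backed behavior I mean looks something like this (the file name is made up; the point is that the array's backing store is an ordinary on-disk file):

```python
import os
import tempfile
import numpy as np

# Hypothetical file name; numpy.memmap backs the array with this on-disk file.
path = os.path.join(tempfile.gettempdir(), "shared.dat")

mm = np.memmap(path, dtype=np.float64, mode="w+", shape=(1000,))
mm[:] = np.arange(1000)
mm.flush()  # data now lives on disk, visible to other processes mapping the same file

del mm            # drop the mapping
os.unlink(path)   # remove the backing file
```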

Is there a convenient pipeline for:

  1. Create a ramfs filesystem just big enough to hold an ndarray of a particular shape/dtype
  2. Memory-map that ndarray onto a file in that ramfs
  3. Let me do whatever I want with the array
  4. Clean up the memmap
  5. Clean up the ramfs
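Roughly what I'm imagining, sketched against /dev/shm instead of a dedicated ramfs (most Linux systems mount a tmpfs there by default, so steps 1 and 5 reduce to creating and deleting a file; `shared_array` is a hypothetical helper name, not anything from joblib):

```python
import os
import tempfile
import numpy as np

def shared_array(shape, dtype=np.float64):
    """Create an ndarray memmapped onto /dev/shm when available.

    /dev/shm is a tmpfs mount present on most Linux systems, so the
    backing file lives in RAM and no mount/umount step is needed.
    """
    # Fall back to the default temp dir if /dev/shm does not exist.
    dir_ = "/dev/shm" if os.path.isdir("/dev/shm") else None
    fd, path = tempfile.mkstemp(dir=dir_)
    try:
        # Step 1: size the backing file to exactly hold the array.
        size = int(np.prod(shape)) * np.dtype(dtype).itemsize
        os.ftruncate(fd, size)
        # Step 2: map the ndarray onto the file.
        arr = np.memmap(path, dtype=dtype, mode="r+", shape=shape)
    finally:
        os.close(fd)  # np.memmap holds its own file handle
    return arr, path

# Step 3: use the array; steps 4-5: drop the mapping and the file.
arr, path = shared_array((3, 4))
arr[:] = 1.0
arr.flush()
del arr            # clean up the memmap
os.unlink(path)    # clean up the RAM-backed file
```

Passing `path` to worker processes, each of which opens its own `np.memmap` on the same file, is how I'd expect the sharing part to work.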

I could invoke a bunch of subprocess.call(["mount", ...])-type commands, but that seems overly complex. Is there a better way to do this?

  • If anyone knows of a "ramfs"-relevant tag to add to this question, that would be super. – Him Dec 09 '19 at 21:22
