PyFilesystem (`fs` on pip) is a great library that supports in-memory filesystem creation in Python. However, I am looking to create and maintain a filesystem in one Python process and dynamically access that filesystem from another process.

The docs for the MemoryFS class are bare-bones, and it doesn't appear to be usable like that: a MemoryFS can open from a "path", but that path does not mean the same thing in two different processes. The instances appear to be (understandably) completely sandboxed.
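
For illustration, here is a minimal sketch of the sandboxing I mean (the filename is just for demonstration):

```python
from fs.memoryfs import MemoryFS

# Process A: create an in-memory filesystem and write a file into it.
mem_fs = MemoryFS()
mem_fs.writetext("hello.txt", "hello from process A")

# Process B (a separate interpreter): MemoryFS() here constructs a
# brand-new, empty filesystem. There is no "path" or handle that would
# let it see process A's data; the two instances are fully sandboxed.
other_fs = MemoryFS()
print(other_fs.exists("hello.txt"))  # False
```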

Is this possible in PyFS? If not, is there an alternative way in Python? If not, is there a similar cross-platform solution for a ram-disk that would function in this way?

Matthew Mage
  • Welcome to StackOverflow. Please read and follow the posting guidelines in the help documentation, as suggested when you created this account. [On topic](http://stackoverflow.com/help/on-topic) and [how to ask](http://stackoverflow.com/help/how-to-ask) apply here. StackOverflow is not a design, coding, research, or tutorial service. – Prune Jun 29 '18 at 22:46
  • Excuse me? I don't think I did anything that went against the guidelines. I already suggested a solution that did not work, and I am looking for an alternative. – Matthew Mage Jun 29 '18 at 22:50
  • @MatthewMage Would a simple ramdisk work for you? – jtlz2 Jan 31 '20 at 12:13

1 Answer

The original PyFilesystem had tools to do just that: you could expose a filesystem via XML-RPC, for example, and connect to it from another process as an FS object.
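
If you're on v1, it looked roughly like this (a sketch from memory; the module and class names are from the old v1 `fs.expose` docs and may not match your installed version exactly):

```python
# server.py (process A): expose a MemoryFS over XML-RPC.
from fs.memoryfs import MemoryFS
from fs.expose.xmlrpc import RPCFSServer

mem_fs = MemoryFS()
mem_fs.setcontents("hello.txt", b"hello from process A")
RPCFSServer(mem_fs, ("127.0.0.1", 8000)).serve_forever()

# client.py (process B): connect to it as an ordinary FS object.
from fs.rpcfs import RPCFS

remote_fs = RPCFS("http://127.0.0.1:8000/")
print(remote_fs.getcontents("hello.txt"))
```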

PyFilesystem2 doesn't have such functionality, although v2 has been designed to make implementing 'remote filesystems' much easier.

I'm not sure what your use case is, but you could store your data on an FTP server or Amazon S3, both of which are supported by PyFilesystem. Any particular reason to want an in-memory solution?
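
For example, with v2's opener URLs (the host, credentials, and bucket below are placeholders; S3 support comes from the separate fs-s3fs package):

```python
from fs import open_fs

# Open remote filesystems by URL; these endpoints are placeholders.
ftp_fs = open_fs("ftp://user:password@ftp.example.org/")
s3_fs = open_fs("s3://my-bucket")  # needs the fs-s3fs plugin installed

# Either behaves like any other PyFilesystem FS object.
for path in ftp_fs.listdir("/"):
    print(path)
```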

The PyFilesystem mailing list may be a better place to brainstorm about such things.

Will McGugan
  • Any plans on reimplementing that functionality? We are looking to have a MemoryFS maintained by one process, which is where downloaded files are stored, and have that be available to be accessed by others. We don't want to be redownloading the data from a server every time the access process is run. So maintaining the data on FTP or S3 is already what we are doing in a way. – Matthew Mage Jun 30 '18 at 17:11
  • We also would like to avoid touching disk, if possible. So unfortunately your proposed solution does not fit our problem set. Thanks for the comment though! – Matthew Mage Jun 30 '18 at 17:12
  • No immediate plans, but that's not to say it will never happen. From the sound of it, you might want to consider a caching proxy, or roll your own caching with memcached or Redis; a rough sketch of that idea follows these comments. – Will McGugan Jul 01 '18 at 13:07
  • Re "Any particular reason to want an in-memory solution?", I am seeking a way to share memory among independent processes, or at least share files. I can do this now with the OS's file system, but the processes have to poll the directory for changes, resulting in performance tradeoffs. If we could have a mountable memory fs that could be shared among processes like physical file systems are, this would enable what I would like to do. – philologon Jan 27 '19 at 23:06
  • @WillMcGugan Still the case that there are no plans? :( – jtlz2 Jan 31 '20 at 11:30
  • Is pyfilesystem v1 still around perhaps? – jtlz2 Jan 31 '20 at 11:31
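
Following up on the memcached/Redis suggestion in the comments, a minimal sketch of the caching idea using the redis-py client (the key prefix and expiry are illustrative assumptions):

```python
from typing import Optional

import redis

r = redis.Redis(host="localhost", port=6379)

# Downloader process: after fetching a file once, cache its bytes in Redis
# so other processes can read them without re-downloading or touching disk.
def cache_file(name: str, data: bytes) -> None:
    r.set("filecache:" + name, data, ex=3600)  # expire after one hour

# Consumer process: read from the shared in-memory store; None on a miss.
def read_cached(name: str) -> Optional[bytes]:
    return r.get("filecache:" + name)
```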