
I have 21 SSHFS mounts from one Debian server to another. The servers are on a 1 Gbps LAN. When I mount these 21 filesystems, 675 MB of real memory (not buffers or cache) is allocated on the server mounting the resources (the one acting as the client). I also tried the option "-o cache=no", but it didn't change anything.
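Each mount is set up roughly like this (hostname and paths below are just placeholders):

    sshfs -o cache=no storage01:/export/share01 /mnt/share01
    sshfs -o cache=no storage01:/export/share02 /mnt/share02
    # ... 21 mounts in total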

Since I'll need to mount some hundreds of filesystems via SSHFS in production, with this memory usage it will never scale. Is it normal for SSHFS mounts to consume this much RAM? Is there anything I can do to reduce it? As I said, the machines are linked by a 1 Gbps LAN and latency on file access is not critical for this project, so caching is not required.
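One way to see where the memory goes is to sum the resident sizes of the sshfs processes and the ssh transports they spawn (a rough estimate only, since RSS double-counts shared pages):

    # sums resident set sizes (KiB) of all sshfs processes and ssh transports;
    # note this counts every ssh on the box, not only the ones spawned by sshfs
    ps -o rss= -C sshfs,ssh | awk '{ sum += $1 } END { printf "%d MB\n", sum/1024 }'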

lucaferrario

1 Answer


Yes, this is normal.

Why not set up a tunnel between the two machines, say with OpenVPN, and then use an ordinary network file system such as NFS instead? Or something else that suits your needs better.
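A rough sketch of what that could look like, assuming the OpenVPN tunnel gives the file server 10.8.0.1 and the client 10.8.0.2, and the data lives under /srv/data on the server (all of these are placeholders for your setup):

    # on the file server: export the tree to the tunnel address only
    # /etc/exports
    /srv/data  10.8.0.2(rw,sync,no_subtree_check)

    # reload the export table
    exportfs -ra

    # on the client: a kernel-space NFS mount instead of many sshfs mounts
    mount -t nfs 10.8.0.1:/srv/data /mnt/data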

Marc Stürmer
  • And are you sure that 200 NFS mounts will consume significantly less memory than 200 SSHFS mounts? – lucaferrario Jul 31 '14 at 13:21
  • SSHFS is a FUSE filesystem, meaning most of the work is handled by separate user-space processes; NFS is handled mostly in kernel space. In the end it seems to me you are using SSHFS for something it was never meant to do, or to scale to. I am sure moving to a different approach would be the wise thing to do, and there's only one way to know for sure: try it out. – Marc Stürmer Jul 31 '14 at 13:24
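The user-space overhead described in the comments is easy to see on the client: each sshfs mount keeps an sshfs process plus an ssh transport alive. A quick, rough check:

    pgrep -c sshfs   # one FUSE process per mount
    pgrep -c -x ssh  # plus one ssh transport per mount (assuming nothing else on the box uses ssh)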