
I have a configuration in which I want to use a large number (1000-10000) of bind mounts (mount --bind) of directories back onto the same filesystem. The filesystem is ext4 on a RAID 1 array of two 400 GB HDDs.

Basically, I am trying to make files available via FTP (and eventually a web interface on top) that are synchronized to this machine from local servers with low bandwidth to the internet. The files come from many different sources, so it seemed viable to chroot my roughly 1000 users into individual directories and place bind-mounted references to the various synchronized paths inside them, according to each user's permissions. Symlinks do not work because of the chroot jail. I am using vsftpd.
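For illustration (all paths here are hypothetical), each entry in a user's chroot can be created as a one-off bind mount, or made persistent across reboots with an /etc/fstab entry:

```
# One-off, as root:
#   mount --bind /srv/sync/projectA /home/ftp/user1/projectA

# Persistent equivalent in /etc/fstab:
/srv/sync/projectA   /home/ftp/user1/projectA   none   bind   0   0
/srv/sync/projectB   /home/ftp/user1/projectB   none   bind   0   0
```

The target directories must exist before mounting, and a script can generate these entries per user.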

Is there a performance or any different problem with that?

voretaq7
HalloDu

2 Answers


I can't think of a better way of accomplishing what you're looking to do (short of rsyncing the data into each directory it should appear in, which is messy and disk-space intensive).

Performance-wise I doubt this will be an issue, though you may have to tweak /proc/sys/fs/super-max if you run out of slots for mounted filesystems (I'm not sure whether --bind takes up a slot in the mounted-FS superblock list or not).
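As a quick check (file names hedged: fs.super-max existed on older 2.x kernels, while newer kernels expose fs.mount-max instead, and neither file may be present on a given system):

```shell
#!/bin/sh
# Count filesystems currently mounted; each mount --bind adds an entry here.
mounts=$(wc -l < /proc/mounts)
echo "active mounts: $mounts"

# Look for a kernel-imposed limit; the exact file depends on kernel version.
limit=$(cat /proc/sys/fs/super-max 2>/dev/null \
     || cat /proc/sys/fs/mount-max 2>/dev/null \
     || echo "no explicit limit file")
echo "limit: $limit"
```

If a limit file exists on your kernel, it can be raised by writing a larger value to it as root.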

That being said, there are lots of reasons not to use mount --bind, this one being one of my favorites. If a quick Google search doesn't turn up any egregiously bad consequences, I think you're probably OK doing this, though it's definitely odd and should be extensively documented :)

voretaq7
  • I tried it, and it seems to work fairly well. The directory structure there is entirely managed by a script, so that should not be a problem. – HalloDu Feb 11 '11 at 10:49

If this is being done over FTP, could you not just use the built-in per-user /home/$user handling that most FTP daemons provide? Most popular daemons can do the chroot for you. That would save you from binding at the filesystem level if it only needs to be enforced via FTP.
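Since the question mentions vsftpd, a minimal sketch of that approach in vsftpd.conf might look like this (the option names are standard vsftpd options; the local_root path is only an example):

```
local_enable=YES
chroot_local_user=YES
# $USER is substituted with the logged-in username
user_sub_token=$USER
local_root=/srv/ftp/$USER
```

Note that this chroots each user into their own tree, but it does not by itself expose shared data inside those trees, which is what the bind mounts in the question are for.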

jizaymes