When setting up an FTP account for a specific purpose - e.g. as a drop-point for sharing data files - it seems sensible to give the user access only to the particular directory, and no view of a wider file system.

On *nix systems, in particular, every user generally has read access to a lot of system files such as /etc/passwd. FTP daemons generally allow you to hide these by executing a chroot on login, so that the user is in a virtual "jail".
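
For concreteness, the jail is typically set up with a sequence like this minimal C sketch (the function and its arguments are illustrative, not taken from any particular daemon):

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Minimal sketch of the classic chroot-on-login sequence, assuming the
 * daemon is still running as root at this point; the jail path and the
 * uid/gid come from the daemon's per-user configuration. */
static void jail_user(const char *jail, uid_t uid, gid_t gid)
{
    if (chroot(jail) != 0) { perror("chroot"); exit(1); }
    if (chdir("/") != 0)   { perror("chdir");  exit(1); }  /* "/" is now the jail */

    /* Drop root only *after* the chroot -- chroot(2) itself requires
     * root privileges -- so the user cannot later escape the jail. */
    if (setgid(gid) != 0 || setuid(uid) != 0) { perror("setuid"); exit(1); }
}
```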

But chroot was not designed as a security measure [archive copy as site seems down], and can even introduce security problems of its own. For this reason, vsftpd restricts the feature so that you can only chroot to a read-only directory, and the user must then navigate into a sub-directory to perform any write operations; ProFTPD warns of the problem but offers no alternative; and Pure-FTPd requires various special files to be created before a chroot can be used at all.

It seems to me that there is no fundamental reason for FTP access to map to the OS's notion of filesystem access at all; like an HTTP daemon, an FTP daemon could "rewrite" all requests according to a set of configuration rules. If you ask an Apache web host for the path /, it maps that to the directory named by the DocumentRoot directive, not to the host OS's own / directory.
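
As a rough illustration of what I mean, here is a minimal C sketch of such a rewrite step (the function name `resolve_virtual_path` and its checks are mine, not from any real daemon):

```c
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Resolve a client-supplied path against a configured virtual root,
 * the way httpd resolves "/" against DocumentRoot. Assumes ftp_root is
 * already canonical, with no trailing slash. Returns 0 on success,
 * -1 if the path does not exist or escapes the root. */
static int resolve_virtual_path(const char *ftp_root, const char *request,
                                char *out, size_t outlen)
{
    char joined[PATH_MAX], real[PATH_MAX];
    size_t rootlen = strlen(ftp_root);

    snprintf(joined, sizeof joined, "%s/%s", ftp_root, request);
    if (realpath(joined, real) == NULL)
        return -1;                                /* unresolvable path */

    /* Reject anything that ".." has walked out of the virtual root;
     * the boundary check stops "/srv/ftp-evil" matching "/srv/ftp". */
    if (strncmp(real, ftp_root, rootlen) != 0 ||
        (real[rootlen] != '/' && real[rootlen] != '\0'))
        return -1;

    snprintf(out, outlen, "%s", real);
    return 0;
}
```

A production version would also need to worry about symlinks created after the check, but the principle is the same: the client never sees a host path at all.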

My question is, does any *nix FTP daemon use a "rewriting" mechanism like this (or some other way of limiting access), and if not, is there a fundamental reason?


Note: there is some overlap with this existing question, but the answers primarily discuss whether to use chroot or not, rather than complete alternatives.

IMSoP
  • looks kind of like a shopping question. – Daniel Widrick Sep 12 '13 at 17:54
  • @lVlint67 Fair point. I'm not asking for opinions on "best" implementation, just if there is a design other than `chroot`, but I guess I am hoping for a specific implementation too. I am also interested in the abstract question, though: is FTP necessarily bound to file system operations, so that what I'm looking for is unreasonable, or is it just that the FTPD implementations I've found were all based on the same (arguably flawed) design? I'll leave it to you and others to decide if that's enough justification to leave it open. – IMSoP Sep 12 '13 at 18:50

1 Answer

http://www.ietf.org/rfc/rfc959.txt

Skimming it, I don't see anything in the spec that requires the 'destination' or server side to map onto a specific type of file system. So I suspect anyone could write a daemon that jails users in any reasonable way - for instance via a pluggable backend like the sketch below - and it would still be a conforming FTP server.
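
As a rough sketch of what I mean, the protocol code could route every operation through its own callback table instead of the host file system. This struct is entirely hypothetical - no real daemon defines it - but any backend (a mapped directory, a database, an in-memory tree) would look the same to the protocol layer:

```c
#include <stddef.h>
#include <sys/types.h>

/* Hypothetical backend interface an FTP daemon could call instead of
 * open()/read()/readdir() on the host file system. */
struct ftp_backend {
    void   *(*open)(void *ctx, const char *virt_path, int for_write);
    ssize_t (*read)(void *handle, void *buf, size_t len);
    ssize_t (*write)(void *handle, const void *buf, size_t len);
    int     (*list)(void *ctx, const char *virt_path,
                    void (*emit)(const char *name, void *arg), void *arg);
    void   (*close)(void *handle);
};
```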

Alternatively, something like SELinux may be able to restrict FTP users to certain directories without requiring any change to the FTP daemon.

Daniel Widrick
  • Wow, I'd forgotten just how venerable FTP is. Not only is that RFC nearly 28 years old, but there had already been 42 RFCs on the subject in the 14 years before that! To the point, it explicitly mentions handling of all sorts of incompatible file systems; the tendency of non-Unix servers to emulate a Unix path structure is a later cultural phenomenon. RFC 3659 *proposes* a standard virtual file system based on this practice, but isn't an accepted standard. So, historically, I guess a fully virtualised file system in an FTP daemon would not have been an obvious design. – IMSoP Sep 12 '13 at 20:56