
I'm mounting my /home directory remotely using sshfs. Since UIDs and GIDs aren't the same on the server and client, I'm using idmap=file. Additionally, because of application requirements, I must mount all of /home rather than individual user directories.

sshfs_uids:

user1:1001
user2:1000

sshfs_gids:

user1:1001
user2:1000

Command to mount:

sudo sshfs -o nonempty -o transform_symlinks -o hard_remove -o allow_other -o nomap=ignore -o idmap=file -o uidfile=/root/sshfs_uids -o gidfile=/root/sshfs_gids root@myserver:/home /home

When reading files, everything works as expected (files that should be owned by user1:user1 are indeed so). However, when I write as user1, this happens:

user1@myclient:~$ touch foo
user1@myclient:~$ ls -l foo
-rw-r--r--. 1 root root 0 Jun 13 13:54 foo

My user writes files as root! Even doing a ls -l from myserver turns up the same root ownership. I can fix it manually, though:

user1@myclient:~$ chown user1:user1 foo
user1@myclient:~$ ls -l foo
-rw-r--r--. 1 user1 user1 0 Jun 13 13:54 foo

Is it possible, using an sshfs or FUSE option, to make new files be owned by the user that created them? If not, can I make sshfs or FUSE call a custom script every time a file is written, so that I can fix the file's ownership using chown?
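For what it's worth, I'm not aware of any sshfs/FUSE write hook, so the closest thing to the callback idea I can sketch is a server-side root job that finds misowned files and chowns them back. The helper name and polling approach below are mine, not an sshfs feature:

```shell
#!/bin/sh
# Sketch of the "callback" idea as a server-side workaround (not an
# sshfs/FUSE hook -- names and approach are illustrative only).

# Print regular files under directory $1 whose owner is not $2
# ($2 may be a user name or, with GNU find, a numeric uid).
list_misowned() {
    find "$1" -type f ! -user "$2" -print
}

# Root on myserver could then run, e.g. from cron:
#   list_misowned /home/user1 user1 | xargs -r chown user1:user1
```

This obviously races against readers between the write and the fixup, so it's a mitigation rather than a real solution.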

EDIT:
If neither of the above are possible, can anyone recommend some alternative remote filesystem software that is:

  • secure for use over the public internet
  • transparent (after setup) to users/scripts (so not plain scp)
  – Nathan Vance
    • You are logging in as `root` -> `root@my_server`. What do you expect? Works as designed. – Thomas Jun 13 '17 at 17:02
    • I'm well aware of that. I assume you mean to say that my first question about an option to automatically set my desired ownership is a no-go. How about the callback script? How about an alternative solution? – Nathan Vance Jun 13 '17 at 17:05
    • Does it have to be *...file system software that use ssh keys...*? Why not use NFS or Samba? – Thomas Jun 14 '17 at 07:46
    • @Thomas, The server and client are separated over the open internet, so I would have to find some other way of securing it such as going through a VPN or using Kerberos with NFS. I'm open to running things through a VPN as a last resort (downside is it's slow). From what I've read, Kerberos seems like more trouble than it's worth, not only in its setup but also its usage. But you're right, I'll edit the question to relax my requirements on alternative software. – Nathan Vance Jun 14 '17 at 15:14
    • This seems like a bug in `sshfs`. – JamesThomasMoon Sep 05 '17 at 19:06

    3 Answers


    I suspect you mounted the filesystem as root (via sudo); hence new files are created as the user the filesystem was mounted with.

    – chicks

    The issue in this case is that you are connecting as root:

    root@myserver:/home /home
    

    For reading, sshfs retrieves the ownership information as-is, but when you write, the server creates the file as the user you connected with.

    Keep in mind that the server is the one writing the data, so it writes as the user that connected to it. The solution is to connect as the user that you want to own the written data:

    user1@myserver:/home /home
    
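    Applied to the question's full mount command, only the login user changes (all other options kept as-is):

```shell
sudo sshfs -o nonempty -o transform_symlinks -o hard_remove -o allow_other \
    -o nomap=ignore -o idmap=file -o uidfile=/root/sshfs_uids \
    -o gidfile=/root/sshfs_gids user1@myserver:/home /home
```

    One caveat for this question, though: with a single login user, every write from any client user arrives on the server as user1, so this only helps when each user gets their own mount rather than a shared /home mount.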
    – Thiago Conrado

    For anyone else having a similar issue, I found that ssh-tunneling nfs did the trick.

    /etc/exports on myserver:

    /home localhost(insecure,rw,sync,no_subtree_check,no_root_squash)
    

    /etc/idmapd.conf on myserver and myclient (citation):

    ...
    Domain = localdomain
    ...
    

    /etc/modprobe.d/nfsd.conf on myserver:

    options nfsd nfs4_disable_idmapping=0
    

    /etc/modprobe.d/nfs.conf on myclient:

    options nfs nfs4_disable_idmapping=0
    

    The above two files set /sys/module/nfsd/parameters/nfs4_disable_idmapping and /sys/module/nfs/parameters/nfs4_disable_idmapping to "N" on boot (citation).
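    To confirm those settings took effect after a reboot, the parameters from the paragraph above can simply be read back (the expected value is N on both machines):

```shell
cat /sys/module/nfsd/parameters/nfs4_disable_idmapping   # on myserver
cat /sys/module/nfs/parameters/nfs4_disable_idmapping    # on myclient
```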

    Either reboot, or restart nfs/idmap related services on both machines and run nfsidmap -c. Then, tunnel the connection:

    user1@myclient:~$ ssh -fN -L 3049:localhost:2049 user1@myserver
    user1@myclient:~$ sudo mount -t nfs4 -o port=3049 localhost:/home /home
    

    At this point, the firewall for myserver only has to be open on port 22, and the nfs traffic will be as secure as ssh.

    EDIT:
    This didn't work; it only appeared to work. Apparently, while one might expect idmap to map IDs, it only does so at a high level, so certain operations slip past.

    – Nathan Vance