
I have a tree of files and folders, most of which are owned by the principal user we'll call "laurel". One of the subtrees is wholly owned by another user called "hardy". Finally, because Laurel & Hardy like mysql, there's a mysql data directory owned by the "mysql" user. All three users exist on the system, although "mysql" doesn't have a login shell.

(Let's put the subtree at ~/subtree.)

I would like to move the subtree to a shared system. I've created a directory /mnt/data locally and a directory /data/main on the remote server, and used the following command to mount remote:/data/main (owned by the remote user "ubuntu") onto /mnt/data:

sshfs -o idmap=user -o reconnect -o allow_other -o ServerAliveInterval=15 -o IdentityFile=$HOME/.ssh/id_rsa  ubuntu@$IP:/data/main /mnt/data

However, I can't figure out how to get my 3-owner file tree into the remote system, or even if I can. If I run cp -r ~/subtree /mnt/data/ I get permission-denied errors on some of the mysql files in ~/subtree, which have perms 0700:

cp: cannot open `mysql/data/ib_logfile0' for reading: Permission denied

If I run sudo cp -r ~/subtree /mnt/data/ the resulting files are now owned by laurel, and mysqld will no longer work, because it wants the files to be owned by mysql. If I run sudo cp -r -p ~/subtree /mnt/data/, I get these error messages:

cp: failed to preserve ownership for `/mnt/data/mysql/data/ib_logfile0': Permission denied
...  # and on and on for every other file owned by mysql
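
As far as I can tell it's the chown step that fails: a quick probe through the mount above shows the remote side refusing ownership changes (the sftp server runs as "ubuntu", which can't chown):

sudo touch /mnt/data/probe
sudo chown mysql:mysql /mnt/data/probe   # fails with a permission error
sudo rm /mnt/data/probe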

I built a kludgy system that works, but it's awful. I created 3 non-system users on my server, copied the keys there, and made 3 separate calls to sshfs to set them up:

# As laurel:
sshfs -o idmap=user -o reconnect -o allow_other -o ServerAliveInterval=15 -o IdentityFile=$HOME/.ssh/id_rsa  ubuntu@$IP:/data/main-laurel /mnt/data-main

sudo -u hardy sshfs -o idmap=user -o reconnect -o allow_other -o ServerAliveInterval=15 -o IdentityFile=/home/hardy/.ssh/id_rsa  hardy@$IP:/data/main-hardy /mnt/data-hardy

sudo -u mysql sshfs -o idmap=user -o reconnect -o allow_other -o ServerAliveInterval=15 -o IdentityFile=/home/mysql/.ssh/id_rsa  ubuntu@$IP:/data/main-mysql /mnt/data-mysql
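
# Move each owner's piece through its own mount, then remove the original: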

cd ~/subtree
cd mysql
sudo -u mysql cp -r data /mnt/data-mysql/
sudo -u mysql rm -fr data
cd ../
sudo -u hardy cp -r hardy /mnt/data-hardy/
sudo -u hardy rm -fr hardy
cd ..
cp -r subtree /mnt/data-main/
rm -fr subtree

# And now link everything back together

ln -s /mnt/data-main/subtree .
cd /mnt/data-main/subtree
ln -s ../../data-hardy/hardy .
cd mysql
ln -s /mnt/data-mysql/data .

Did I mention that I had to set up 'mysql' and 'laurel' as non-system users on both machines? Otherwise sshfs doesn't work.

This works, but there's a further complication. This is a comedy server, and users can create new container-like objects, each of which gets its own newly created owner. So when a user tries to create a new object, say keaton, it fails immediately: the sshfs mount won't recognize the new owner, since it expects new objects to be owned by laurel. The only way around it is to create the matching user on the remote server first, so the system can set up another mount and link like the hardy one above, as sketched below. I don't think this will go over very well.
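
Concretely, every new object would need a provisioning step roughly like this before the app touches it (a sketch only; user names and paths are placeholders, and copying keaton's public key to the server is elided):

# On the server: matching user plus a dedicated export directory
ssh ubuntu@$IP 'sudo adduser --disabled-password --gecos "" keaton &&
                sudo install -d -o keaton -g keaton /data/main-keaton'

# On this machine: matching local user, mount point, mount, and link
sudo adduser --disabled-password --gecos "" keaton
sudo mkdir /mnt/data-keaton
sudo -u keaton sshfs -o idmap=user -o reconnect -o allow_other \
    -o IdentityFile=/home/keaton/.ssh/id_rsa \
    keaton@$IP:/data/main-keaton /mnt/data-keaton
sudo -u keaton mkdir /mnt/data-keaton/keaton
ln -s /mnt/data-keaton/keaton /mnt/data-main/subtree/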

Eric

3 Answers


What you want to have happen isn't really possible with SSHFS. The primary reason is that SSHFS runs as a specific user on the remote host, and that user isn't allowed to change the ownership of files on the remote side. I don't know whether SSHFS even supports this if you mount using root as the remote user (I don't recommend that even if it does work, so I didn't test it).

You are much better off using another solution for remote files if possible. NFS or CIFS (Samba) both work well but require quite a bit of setup and you need to be mindful of security.
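
For illustration, an NFS export roughly like the following would let root on the client preserve arbitrary owners (an untested sketch; the export path and client range are assumptions, and classic NFS matches numeric UIDs, so the mysql/hardy UIDs would have to line up on both machines, or you'd need NFSv4 idmapping):

# On the server, in /etc/exports; no_root_squash is what allows the
# client's root to set arbitrary owners, so restrict the client range:
/data/main 192.0.2.0/24(rw,sync,no_root_squash,no_subtree_check)

# Reload the export table on the server:
sudo exportfs -ra

# On the client:
sudo mount -t nfs $IP:/data/main /mnt/data
sudo cp -a ~/subtree /mnt/data/    # ownership preservation now sticks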

David
  • Plan B has been to try nfs. I went with sshfs because I have some experience with it. – Eric Sep 15 '16 at 16:16

The easiest way to do this is either to use rsync for the job, or to pipe tar via SSH to the remote server.

tar cf - /path/to/source | ssh user@remote.server "cat > /path/temp/dest/file.tar"

On destination server after this:

cd /path/to/destination
tar xf /path/temp/dest/file.tar
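
For the rsync route, something along these lines should do it (a sketch: ownership can only be set by root on the receiving end, hence the --rsync-path trick, which assumes the remote user has passwordless sudo; --numeric-ids keeps raw UIDs since the user names may not exist remotely):

sudo rsync -aH --numeric-ids --rsync-path="sudo rsync" \
    ~/subtree/ ubuntu@$IP:/data/main/subtree/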
Tero Kilkanen
  • I need a live mount point. Think of working off a persistent backing store from a volatile VM. – Eric Sep 15 '16 at 16:14

If you absolutely must use SSHFS for this arguably bizarre scenario, you need to use multiple SSHFS mounts to accomplish this without permissions errors. Specifically, one per user.

Spooler
  • That's exactly what I tried. The problem I ran into: creating a new user to house the new container worked, but as soon as the app tried to create a file owned by that user, it got a permission-denied error, because the file was being created in a mount point owned by `laurel`, and the sshfs mount (correctly) didn't recognize the new user. – Eric Sep 15 '16 at 16:14
  • Do the UIDs match? Gotta match. – Spooler Sep 15 '16 at 23:09
  • They can't match, because the app creates a new user on the client side of the wire and immediately tries to chown a newly created file as that new user. The user doesn't exist on the server side of the wire, so no match, which is why sshfs isn't a good solution for the app. – Eric Sep 16 '16 at 16:35
  • True. Only way that would really work is LDAP or similar. – Spooler Sep 18 '16 at 09:32