
I am having difficulty using gcsfuse to mount a storage bucket onto a directory used for SFTP uploads on a CentOS 7 Compute Engine VM in GCP. I initially mounted it following the basic instructions, like so:

GOOGLE_APPLICATION_CREDENTIALS=/root/service_account.json gcsfuse al-dev-sftp /sftp

but I couldn't create files or directories in the mounted directory. After some research I remounted it like this:

GOOGLE_APPLICATION_CREDENTIALS=/root/service_account.json gcsfuse -o allow_other --gid 0 --uid 0 --file-mode 777 --dir-mode 777 al-dev-sftp /sftp

which allowed the creation of files and directories. However, I couldn't chown/chmod any of the files or directories, which I need to do to set up chroot-jailed user directories, the normal pattern for SFTP.
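
For context, the jail prep I'd normally run on a plain directory is something like this (a sketch using my paths; on the gcsfuse mount these chown/chmod calls either fail or don't stick):

# The chroot jail directory must be root-owned and not writable by others.
mkdir -p /sftp/wraheem
chown root:root /sftp/wraheem
chmod 755 /sftp/wraheem

# The user gets a writable subdirectory inside the jail.
mkdir -p /sftp/wraheem/upload
chown wraheem:wraheem /sftp/wraheem/upload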

To work around this, I created symlinks to the mounted SFTP storage directory, which was fine until we noticed that files uploaded over SFTP were syncing correctly, but a file placed directly in the bucket wasn't showing up in the mounted SFTP directory. I tried some rsync'ing, but no matter which direction I synced, we lost files. (I later read that gcsfuse doesn't support links.)
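
For reference, the symlink workaround looked roughly like this (the home-directory path here is hypothetical):

# Point the user's upload path at the gcsfuse mount.
ln -s /sftp/wraheem /home/wraheem/upload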

Doing more research I found a GitHub project for a Kubernetes GCS SFTP setup that looked promising. I'm not using containers in any way, but I was interested in how the mount was done as well as in the SFTP config it used. This led me to create a mount like so:

gcsfuse --uid 1000 --gid 1001 -o nonempty wraheem /test_sftp/wraheem/upload

which mounts each user's upload directory to a separate bucket (not ideal, but if I can make it work, OK). I can then give root ownership of the user folder while granting the user permission on the upload folder, perfectly jailing the user in SFTP.
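
Concretely, the prep before that mount is roughly this (a sketch; the uid/gid values 1000/1001 come from id wraheem on my box):

# Root owns the jail directory, per the usual ChrootDirectory rules;
# the gcsfuse mount above then maps ownership of upload to the user
# via --uid/--gid, since chown on the mount itself doesn't stick.
mkdir -p /test_sftp/wraheem/upload
chown root:root /test_sftp/wraheem
chmod 755 /test_sftp/wraheem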

The problem with this is that when the user logs in via SFTP (in FileZilla) and tries to navigate to their upload folder, they get an error:

Status:  Retrieving directory listing of "/test_sftp/wraheem/upload"...
Command: cd "/test_sftp/wraheem/upload"
Error:   Directory /test_sftp/wraheem/upload: no such file or directory

even though the directory obviously exists.

My sshd_config entry is:

Match User wraheem
    ForceCommand internal-sftp
    PasswordAuthentication yes
    ChrootDirectory /test_sftp/wraheem
    AllowTcpForwarding no

which works fine with any normal directory using the jail pattern of root owning the user directory and the user owning the subdirectories.
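
For completeness, a chroot setup like this also relies on the in-process sftp subsystem; the usual sshd_config line for that (which I'm assuming here) is:

Subsystem sftp internal-sftp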

I looked at the mount itself and saw the following:

wraheem on /test_sftp/wraheem/upload type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions)

and what stood out to me is that user_id and group_id are set to zero (root), even though I can add files, and an ls -l shows the expected ownership on the folder:

drwxr-xr-x. 1 wraheem wraheem 0 Aug 22 20:16 upload

I wonder if, even though I have permissions on the files, it doesn't matter because the mount point itself is owned by root. As a test I set the user's default remote directory to /media (owned by root) and got the exact same error in FileZilla as before.
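
As a follow-up check, something like this (run as root; paths from my setup) should show whether the jailed user can reach the mount outside of SFTP at all:

# List and write as the jailed user rather than as root.
sudo -u wraheem ls /test_sftp/wraheem/upload
sudo -u wraheem touch /test_sftp/wraheem/upload/probe.txt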

Has anyone used gcsfuse in this manner for SFTP? Or can anyone see where I missed a step or have a configuration issue?

Thanks for any help you guys can think of; at this point I'm wondering whether gcsfuse can even be used for SFTP...

wali
  • For the “no such file or directory” error: if objects are created from a different tool (e.g. the GCP console), you may need the --implicit-dirs flag, as mentioned in this [document](https://github.com/GoogleCloudPlatform/gcsfuse/blob/master/docs/semantics.md) – Fady Aug 24 '18 at 23:52
  • Another approach could be to use the default GCE service account the instance was created with and change the scopes of the instance itself, per this [Google Group discussion](https://groups.google.com/forum/#!msg/gce-discussion/594J75Vj1oM/C3tLcDQCAgAJ). Also, your users can mount the bucket in their home directories (assuming you added each user's public key to the GCE metadata). – Fady Aug 25 '18 at 00:01
