
I am trying to take a backup of a Google Filestore share to a GCS bucket. I also want to rsync the contents of the Filestore instance in the primary region to another Filestore instance in a secondary region.

For this, I have created a bash script that works fine in a Compute Engine VM. I have packaged it into a Docker container, which I run as a Kubernetes CronJob inside a GKE cluster.

But when I run the script inside the GKE pod, it gives me the following error:

root@filestore-backup-1594023480-k9wmn:/# mount 10.52.219.10:/vol1 /mnt/filestore-primary 
mount.nfs: access denied by server while mounting 10.52.219.10:/vol1

I am able to connect to the filestore from the container:

root@filestore-backup-1594023480-k9wmn:/# telnet 10.52.219.10 111 
Trying 10.52.219.10... 
Connected to 10.52.219.10. 
Escape character is '^]'.

The pod IP ranges have also been added to the VPC's IP ranges, and Filestore has been configured to allow full access from that VPC. The same script works fine in a Compute Engine VM.

Why is mounting a Google Filestore share inside a GKE pod not working?


Bash script used for taking a backup of Google Filestore:

#!/bin/bash

# Create the gcloud authentication key file from the environment and activate the service account
touch /root/gcloud.json
echo "$GCP_GCLOUD_AUTH" > /root/gcloud.json
gcloud auth activate-service-account --key-file=/root/gcloud.json


# Back up the primary Filestore share to GCS

DATE=$(date +"%m-%d-%Y-%T")

mkdir -p "/mnt/$FILESHARE_MOUNT_PRIMARY"
mount "$FILESTORE_IP_PRIMARY:/$FILESHARE_NAME_PRIMARY" "/mnt/$FILESHARE_MOUNT_PRIMARY"

gsutil rsync -r "/mnt/$FILESHARE_MOUNT_PRIMARY/" "gs://$GCP_BUCKET_NAME/$DATE/"


# Rsync the primary Filestore share to the secondary-region Filestore

mkdir -p "/mnt/$FILESHARE_MOUNT_SECONDARY"
mount "$FILESTORE_IP_SECONDARY:/$FILESHARE_NAME_SECONDARY" "/mnt/$FILESHARE_MOUNT_SECONDARY"

rsync -avz "/mnt/$FILESHARE_MOUNT_PRIMARY/" "/mnt/$FILESHARE_MOUNT_SECONDARY/"

All the variables are passed as environment variables in the CronJob yaml.
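
For reference, a minimal sketch of how such a CronJob manifest might pass the variables; the name, image, schedule, and values other than the Filestore IP and share name are hypothetical and not taken from the original setup:

apiVersion: batch/v1beta1               # CronJob API version current on GKE in mid-2020
kind: CronJob
metadata:
  name: filestore-backup                # hypothetical name
spec:
  schedule: "0 2 * * *"                 # hypothetical schedule: daily at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: filestore-backup
            image: gcr.io/my-project/filestore-backup:latest   # hypothetical image
            env:
            - name: FILESTORE_IP_PRIMARY
              value: "10.52.219.10"
            - name: FILESHARE_NAME_PRIMARY
              value: "vol1"
            - name: FILESHARE_MOUNT_PRIMARY
              value: "filestore-primary"
            # ...FILESTORE_IP_SECONDARY, FILESHARE_NAME_SECONDARY, FILESHARE_MOUNT_SECONDARY,
            # GCP_BUCKET_NAME and GCP_GCLOUD_AUTH would be set the same way
            # (GCP_GCLOUD_AUTH ideally from a Secret rather than a plain value).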

srsn
  • If you can run your Docker image, exec into that container, try the mount command manually, and check what is happening. You can use `-vvvv` for verbose output. – Harsh Manvar Jul 06 '20 at 09:23
  • @HarshManvar I tried running the commands manually inside the pod and got the above error. You can also see that I have run the telnet command inside the container. – srsn Jul 06 '20 at 09:33
  • Using the -vvvv option: root@filestore-backup-1594027980-t2fct:~# mount $FILESTORE_IP_PRIMARY:/$FILESHARE_NAME_PRIMARY /mnt/$FILESHARE_MOUNT_PRIMARY -vvvv → mount.nfs: timeout set for Mon Jul 6 09:42:30 2020; mount.nfs: trying text-based options 'vers=4.2,addr=10.52.219.10,clientaddr=10.185.64.156'; mount.nfs: mount(2): Permission denied; mount.nfs: access denied by server while mounting 10.52.219.10:/vol1 – srsn Jul 06 '20 at 09:41
  • Check this answer: https://stackoverflow.com/questions/44442354/using-standalone-gsutil-from-within-gke and make the modifications needed for your case. – Will R.O.F. Jul 06 '20 at 17:03
  • @willrof I don't have any issue with gsutil; it is working fine. I'm not able to mount Google Filestore as an NFS mount in the GKE pod. – srsn Jul 06 '20 at 18:49
  • Are you using an alias IP range? Did you whitelist the node IPs? Are you using the IP masquerade agent? Maybe the NFS server is blocking the node IPs. – Will R.O.F. Jul 07 '20 at 13:09
  • @willrof By default, Google Filestore asks you to choose a VPC that is allowed to communicate with the Filestore instance. I have already added the Pod IP ranges and the Service IP ranges, in addition to the subnet (node) IP ranges, to the VPC that is authorized for Filestore. I have also tried manually whitelisting the Pod IP ranges in Filestore, but the issue remains. I also tried telnet-ing to Filestore from the Pod and that works, yet I still get the access denied error. It seems to be an issue with Google Filestore; maybe Google doesn't support mounting Filestore inside Docker. – srsn Jul 07 '20 at 17:28
  • It would be great if someone from Google could comment on it. – srsn Jul 07 '20 at 17:29

1 Answer


The reason you can't access it is that GKE consumes Filestore differently from other GCP instances: to be able to mount the share inside a pod, you have to create a PersistentVolume and a PersistentVolumeClaim.

  • If you only need static access to the Filestore share, you can follow the guide on accessing Filestore instances from GKE to manually create a PV and a PVC and attach them to your application (see the sketch after this list).

  • If you want to make it more dynamic and ready for broader use, consider using an NFS Client Provisioner. It creates a StorageClass that can be referred to in your yamls; in a nutshell, the StorageClass dynamically provisions the PVs and PVCs for each access. You can follow the provisioner's guide for that setup.

  • Additionally, you can also use the Filestore CSI driver to let GKE workloads dynamically create and mount Filestore volumes without using Helm. However, the CSI driver is not a supported Google Cloud product, so consider whether it fits your production environment.
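
For the first option, here is a minimal sketch of a static PersistentVolume/PersistentVolumeClaim pair for the primary share from the question; the resource names and the 1T capacity are assumptions, while the server IP and path come from the question:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: filestore-primary-pv            # hypothetical name
spec:
  capacity:
    storage: 1T                         # assumed capacity; match the Filestore instance size
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.52.219.10                # Filestore IP from the question
    path: /vol1                         # Filestore share name from the question
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filestore-primary-pvc           # hypothetical name
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""                  # empty string: bind to the pre-created PV, no dynamic provisioning
  volumeName: filestore-primary-pv
  resources:
    requests:
      storage: 1T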

Choose your path; if you have any questions, let me know in the comments.

Will R.O.F.
  • Thanks for guiding me. I was already using PVs and PVCs to mount Filestore volumes into containers before; I was wondering why mounting it directly inside the container does not work while it works on Compute Engine VMs, which is why I asked this question. It seems to be a design decision from Google. Anyway, I finally built the backup solution as Kubernetes CronJobs that mount Filestore in the container using PVs and PVCs. If anyone is interested, the solution is available at https://github.com/sreesanpd/google-filestore-backup-kubernetes-cronjobs . This can be closed. – srsn Aug 05 '20 at 14:40
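
For anyone reading along, a minimal sketch of how the jobTemplate of such a CronJob could mount the share through the PVC, so the script no longer needs to run mount itself. This is an illustration based on the discussion above, not an excerpt from the linked repository; the names match the earlier hypothetical sketches:

  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: filestore-backup
            image: gcr.io/my-project/filestore-backup:latest   # hypothetical image
            volumeMounts:
            - name: filestore-primary
              mountPath: /mnt/filestore-primary                # matches FILESHARE_MOUNT_PRIMARY
          volumes:
          - name: filestore-primary
            persistentVolumeClaim:
              claimName: filestore-primary-pvc                 # PVC from the sketch in the answer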