I am trying to move all my EC2 applications to K8s. It looks like everything will be OK except the mounted file shares (from AWS File Gateway). Currently some of the EC2 instances use mounted S3 File Shares. These S3 buckets were mounted with a command like:

mount -t nfs -o nolock,hard [IP-ADDRESS]:/[BucketName] [MountPath] 

And it looks like I can't use this inside a Pod of K8s (EKS).

How can I use it (mount it) inside K8s?

Any ideas? I am able to use EFS as a mounted volume; are there any options with this (EFS <-> GW File Share)? Any other options?

UPDATE 1: One possible solution that is working (I tested it). What do you think, is it a good solution?

I created a PV/PVC connected to the File Share and mounted it into the Pod (a sketch of a Pod spec that consumes the PVC is shown after the PVC below):

PV example:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-file-gw-pv
spec:
  capacity:
    storage: 10Gi     <------ not sure about value here
  accessModes:
    - ReadWriteMany
  storageClassName: aws-s3
  mountOptions:
    - tcp
    - nolock
    - hard
  nfs:
    server: [IP ADDRESS]
    path: "/[Bucket Name]"

and PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-file-gw-pvc
  namespace: test
spec:
  storageClassName: aws-s3
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi    <- same here
  volumeName: s3-file-gw-pv
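
For reference, a minimal Pod sketch that consumes this PVC (the Pod name, image, and mount path are only illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: s3-file-gw-test
  namespace: test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "ls /mnt/s3 && sleep 3600"]
      volumeMounts:
        - name: s3-file-gw
          mountPath: /mnt/s3          # illustrative mount path inside the container
  volumes:
    - name: s3-file-gw
      persistentVolumeClaim:
        claimName: s3-file-gw-pvc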

1 Answer


You need a generic NFS CSI driver. The EFS CSI driver only works with EFS. See https://docs.aws.amazon.com/filegateway/latest/files3/use-nfs-csi.html. If you want to mount an S3 bucket as a file system, you could try https://github.com/yandex-cloud/k8s-csi-s3 or the CSI driver for FSx for Lustre, which can be populated from an S3 bucket.
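
For illustration, a statically provisioned PersistentVolume using the generic NFS CSI driver (kubernetes-csi/csi-driver-nfs, driver name nfs.csi.k8s.io) might look something like the sketch below, assuming that driver is installed in the cluster. The PV name is illustrative and the bracketed server/share values remain placeholders for the File Gateway IP and bucket name:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: file-gw-csi-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - nolock
    - hard
  csi:
    driver: nfs.csi.k8s.io
    volumeHandle: file-gw-csi-pv       # must be unique across PVs in the cluster
    volumeAttributes:
      server: "[IP ADDRESS]"
      share: "/[Bucket Name]"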

Jeremy Cowan
  • I will try the nfs-csi driver; for now it looks like I found a possible solution with the ebs.csi.aws.com driver, I updated my first post – prosto.vint May 04 '23 at 14:32