
I have a distributed application that, based on the amount of work to do, can spawn n pods in AWS EKS; n is not known ahead of time.

I want to use Amazon FSx for Lustre as my shared file system so that all of the pods have access to the same data. Since this data will already exist (created via a separate process), I want a statically provisioned FSx for Lustre file system.

I've gone through the examples (https://github.com/kubernetes-sigs/aws-fsx-csi-driver) and have a working PVC -> PV definition that gives a single pod access to the file system. That works fine. However, I now need to expand it to allow n pods.
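For context, my single-pod setup looks roughly like this (the file system ID, DNS name, and mount name are placeholders for my real values):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fsx-pv
spec:
  capacity:
    storage: 1200Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  mountOptions:
    - flock
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: fsx.csi.aws.com
    volumeHandle: fs-0123456789abcdef0        # placeholder FSx file system ID
    volumeAttributes:
      dnsname: fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com  # placeholder
      mountname: fsx                          # placeholder mount name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fsx-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""     # empty string so no dynamic provisioning kicks in
  resources:
    requests:
      storage: 1200Gi
  volumeName: fsx-pv       # bind directly to the statically created PV above
```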

The only examples I have found for this are ones that use a StorageClass to dynamically provision the file system, which isn't my use case.

I want this to work similarly to putting an nfs volume type definition into a pod definition (which allows n instances of that pod to access the same NFS server).
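For comparison, this is the kind of inline NFS volume definition I mean (server and path are placeholders); n copies of this pod can all point at the same server:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
    - name: worker
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
  volumes:
    - name: shared-data
      nfs:                        # inline volume, no PV/PVC required
        server: nfs.example.com   # placeholder NFS server
        path: /exports/data       # placeholder export path
```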

Is this possible?

Craig
  • You can use static or dynamic provisioning to consume the same persistent volume claim from multiple pods from different nodes. I found an example on https://awslabs.github.io/kubeflow-manifests/docs/deployment/add-ons/storage/fsx-for-lustre/guide/#20-setup-fsx-for-lustre. – Jeremy Cowan May 08 '23 at 16:24
  • Thank you for the pointer. However, can this only be multiple pods on different nodes? That severely limits the number of pods that can be attached (i.e., 1 per node). – Craig May 11 '23 at 11:36
  • No, I don't believe so. You should be able to run multiple pods on a node and each can have a connection to shared storage. Have you tried yet? – Jeremy Cowan May 11 '23 at 13:45
  • Thanks. I tried creating a PVC and then having multiple pods reference that single PVC. I didn't know this was possible, but it seems to work fine. Thanks for the pointer and nudge. – Craig May 16 '23 at 14:26
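For anyone landing here later, a minimal sketch of the pattern that resolved this, assuming the fsx-claim PVC from the question: because the PV uses ReadWriteMany, any number of replicas (on the same node or across nodes) can mount the one claim.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workers
spec:
  replicas: 5                      # n pods, all sharing the same FSx file system
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: busybox
          command: ["sleep", "3600"]
          volumeMounts:
            - name: fsx-volume
              mountPath: /data
      volumes:
        - name: fsx-volume
          persistentVolumeClaim:
            claimName: fsx-claim   # the single, statically provisioned PVC
```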

0 Answers