I have a distributed application that, based on the amount of work to do, can spawn n pods in AWS EKS; n is not known ahead of time.
I want to use AWS FSx for Lustre as my shared file system so that all the pods have access to the same data. Since this data will already exist (created via a separate process), I want a statically provisioned FSx for Lustre file system.
I've gone through the examples (https://github.com/kubernetes-sigs/aws-fsx-csi-driver) and have verified a PVC -> PV definition that lets a single pod access the file system. That works fine. However, I now need to expand it so that n pods can mount the same volume.
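For reference, my single-pod static provisioning setup looks roughly like this (the file system ID, DNS name, mount name, and size are placeholders for my actual values):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fsx-pv
spec:
  capacity:
    storage: 1200Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - flock
  csi:
    driver: fsx.csi.aws.com
    volumeHandle: fs-0123456789abcdef0          # placeholder FSx file system ID
    volumeAttributes:
      dnsname: fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com  # placeholder DNS name
      mountname: fsx                             # placeholder mount name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fsx-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""        # empty so the claim binds to the static PV, not a default StorageClass
  volumeName: fsx-pv
  resources:
    requests:
      storage: 1200Gi
```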
The only examples I have found for multiple pods are ones that use a StorageClass to dynamically provision the file system, which isn't my use case.
I want this to work similarly to putting an nfs volume definition directly into a pod spec, which allows n instances of that pod to access the same NFS server.
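This is the NFS pattern I mean; the image, server, and export path below are just placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  replicas: 3                       # n replicas, all reading the same share
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: busybox            # placeholder image
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: shared-data
              mountPath: /data
      volumes:
        - name: shared-data
          nfs:                      # inline volume, no PV/PVC objects needed
            server: nfs.example.com # placeholder NFS server
            path: /exports/data     # placeholder export path
```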
Is this possible?