So far I have two directories:
aws/
k8s/
Inside `aws/` are `.tf` files describing a VPC, networking, security groups, IAM roles, an EKS cluster, an EKS node group, and a few EFS mounts. These all use the AWS provider, and the state is stored in S3.
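The backend wiring for `aws/` looks roughly like this (the bucket, key, and region below are placeholders, not my real values):

```hcl
# aws/backend.tf -- sketch; bucket, key, and region are placeholders
terraform {
  backend "s3" {
    bucket = "my-terraform-state"    # placeholder bucket name
    key    = "aws/terraform.tfstate" # placeholder state key
    region = "us-west-2"
  }
}
```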
In `k8s/`, I'm using the Kubernetes provider to create Kubernetes resources inside the EKS cluster created above. This state is stored in the same S3 bucket, in a separate state file.
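The `k8s/` configuration reads the `aws/` outputs through `terraform_remote_state` and points the Kubernetes provider at the EKS cluster, roughly like this (the bucket, key, and the `cluster_name` output name are placeholders):

```hcl
# k8s/main.tf -- sketch; assumes aws/ exposes a cluster_name output
data "terraform_remote_state" "aws" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"    # placeholder, matches aws/ backend
    key    = "aws/terraform.tfstate" # placeholder, matches aws/ backend
    region = "us-west-2"
  }
}

data "aws_eks_cluster" "this" {
  name = data.terraform_remote_state.aws.outputs.cluster_name
}

data "aws_eks_cluster_auth" "this" {
  name = data.aws_eks_cluster.this.name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```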
I'm having trouble figuring out how to mount the EFS file systems as PersistentVolumes in my pods.
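Roughly, the end state I'm after is a `kubernetes_persistent_volume` resource like this sketch (the PV name and the EFS file system ID are placeholders):

```hcl
# Sketch of the goal: a PersistentVolume backed by one of the EFS file systems
resource "kubernetes_persistent_volume" "efs" {
  metadata {
    name = "efs-pv" # placeholder name
  }
  spec {
    capacity = {
      storage = "5Gi"
    }
    access_modes = ["ReadWriteMany"]
    persistent_volume_source {
      csi {
        driver        = "efs.csi.aws.com"
        volume_handle = "fs-12345678" # placeholder EFS file system ID
      }
    }
  }
}
```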
I've found docs describing how to use an efs-provisioner pod for this (see "How do I use EFS with EKS?"). More recent EKS docs now say to use the Amazon EFS CSI Driver, where the first step is a `kubectl apply` of the following file:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
images:
- name: amazon/aws-efs-csi-driver
  newName: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/aws-efs-csi-driver
  newTag: v0.2.0
- name: quay.io/k8scsi/livenessprobe
  newName: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/csi-liveness-probe
  newTag: v1.1.0
- name: quay.io/k8scsi/csi-node-driver-registrar
  newName: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/csi-node-driver-registrar
  newTag: v1.1.0
```
Does anyone know how I would do this in Terraform? Or, more generally, how to mount EFS file systems as PersistentVolumes in an EKS cluster?