I'm running into a weird issue with my newer deployments where volumes aren't mounting correctly.
An example:
There are PV/PVC pairs for three NFS directories that relate to one deployment (a rough sketch of one pair is below the list):
- NFS/in
- NFS/out
- NFS/config
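For illustration, each pair looks roughly like this. This is a minimal sketch, not my exact manifests: the names, NFS server address, export paths, and sizes are placeholders.

```yaml
# Sketch of one PV/PVC pair (all names, server, paths, and sizes are placeholders)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-in-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10
    path: /srv/nfs/in
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-in-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
```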
In the deployment, those PVCs are mounted at the corresponding volumeMounts (relevant Deployment snippet after the list):
- volumeMounts/in
- volumeMounts/out
- volumeMounts/config
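The relevant part of the Deployment looks roughly like the sketch below, again with placeholder names, image, mount paths, and claim names standing in for the real ones:

```yaml
# Sketch of the pod template wiring PVCs to mount points (names/paths are placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          volumeMounts:
            - name: in
              mountPath: /data/in
            - name: out
              mountPath: /data/out
            - name: config
              mountPath: /data/config
      volumes:
        - name: in
          persistentVolumeClaim:
            claimName: myapp-in-pvc
        - name: out
          persistentVolumeClaim:
            claimName: myapp-out-pvc
        - name: config
          persistentVolumeClaim:
            claimName: myapp-config-pvc
```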
With my older deployments, this works as expected. With the new deployments, the NFS directories are mounted at the wrong mount points: the contents of NFS/in show up at volumeMounts/config, and the contents of NFS/config show up at volumeMounts/in.
This is vanilla Kubernetes on a bare-metal node. The only change from the default configuration was removing the PVC protection finalizers, because PVCs were not being deleted on request:
kubectl patch pvc PVC_NAME -p '{"metadata":{"finalizers": []}}' --type=merge
Any ideas on what would cause the directories to be mounted at the wrong volumeMounts?