I'm running into a weird issue with my newer deployments where volumes aren't mounting correctly.

Example:

There are PV/PVCs for three NFS directories that relate to one deployment:

  • NFS/in
  • NFS/out
  • NFS/config
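
Each directory has its own PV and PVC, roughly like this (names, server address, and sizes here are illustrative, not my exact YAML):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-in-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.example.com
    path: /in
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-in-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi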

In the deployment, those PVCs are mounted at the corresponding volumeMounts:

  • volumeMounts/in
  • volumeMounts/out
  • volumeMounts/config
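
Roughly, the relevant part of the deployment looks like this (container name, image, claim names, and paths are illustrative, not my exact YAML):

apiVersion: apps/v1
kind: Deployment
...
spec:
  template:
    spec:
      containers:
        - name: app
          image: app-image:latest
          volumeMounts:
            - name: in
              mountPath: /volumeMounts/in
            - name: out
              mountPath: /volumeMounts/out
            - name: config
              mountPath: /volumeMounts/config
      volumes:
        - name: in
          persistentVolumeClaim:
            claimName: nfs-in-pvc
        - name: out
          persistentVolumeClaim:
            claimName: nfs-out-pvc
        - name: config
          persistentVolumeClaim:
            claimName: nfs-config-pvc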

With my older deployments, this works as expected. With the new deployments, the NFS directories are mounting at the wrong mount points: the contents of NFS/in end up in volumeMounts/config, and the contents of NFS/config end up in volumeMounts/in.

This is vanilla Kubernetes on a bare-metal node. The only change from the default configuration is that I removed the PVC protection finalizers, because PVCs were not being deleted on request:

kubectl patch pvc PVC_NAME -p '{"metadata":{"finalizers": []}}' --type=merge

Any ideas on what causes the directories to mount in the incorrect volumeMounts?

Chase Westlye
  • Might be hard to identify without seeing the exact resource YAMLs. Blind guess: could it be related to the type of storage class chosen? – dimon222 Jun 27 '20 at 02:08

1 Answer


You have to set the claimName in your Deployment or StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
...
spec:
  ...
  template:
    spec:
      containers:
        - name: container-name
          image: container-image:container-tag
          volumeMounts:
            - name: claim1            # must match the volume name below
              mountPath: /path/to/directory
      volumes:
        - name: claim1
          persistentVolumeClaim:
            claimName: PVC_NAME       # the PVC you want mounted at this path
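
Once the pods are recreated, you can double-check which PV each claim is actually bound to, for example:

kubectl get pvc
kubectl describe pv PV_NAME

The VOLUME column from kubectl get pvc and the Claim field in kubectl describe pv show the binding, so you can confirm that each claim is backed by the NFS path you expect.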
Bayu Dwiyan Satria