
I have a pod that uses 2 persistent volumes. The persistent volumes are in different zones. While deploying I get the following error:

node(s) had volume node affinity conflict

Any solution to the above problem?


1 Answer


I have a pod that uses 2 persistent volumes. The persistent volumes are in different zones.

The volumes that your Pod mounts must all be in the same Availability Zone, so that they can be attached to the Node where the Pod is scheduled.
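For context: a zonal volume (e.g. an EBS disk) gets a `nodeAffinity` on its PersistentVolume that pins it to the zone it was created in, roughly like the sketch below (the name, volume ID, and zone are made-up examples). The scheduler must find a Node that satisfies the `nodeAffinity` of *every* PV the Pod uses; with two PVs in different zones, no such Node exists, which produces the "volume node affinity conflict" error.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-example            # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0   # hypothetical EBS volume ID
    fsType: ext4
  # Added by the provisioner: the volume can only be mounted on
  # Nodes in the zone where the underlying disk lives.
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - us-east-2a
```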

You can also use a Regional Persistent Volume by setting the StorageClass to regionalpd-storageclass. This mirrors your volume across two zones, but it is more expensive and slower, and it adds complexity that you probably don't need here.
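If you did want to go that route, a regional-PD StorageClass looks roughly like this (a sketch for GKE with the Compute Engine CSI driver; the zone values are examples you would replace with your own region's zones):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regionalpd-storageclass
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
  replication-type: regional-pd   # replicate the disk across two zones
# Delay binding until a Pod is scheduled, so the zones can be chosen
# to match where the Pod actually runs.
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.gke.io/zone
        values:
          - europe-west1-b
          - europe-west1-c
```

Note this is a GCP feature; on AWS, EBS volumes are strictly zonal, so the usual fix there is to keep all of a Pod's volumes in one zone.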

Jonas
  • how do you get both the pod and pv to be in the same zone? – Kay Apr 24 '21 at 18:25
  • That depends on what storage system you use. If the PV is accessed over the network, the Pod can be scheduled to any node. If you use local volumes, you may use VolumeScheduling so that the Pod is scheduled to the node where the PV is located. – Jonas Apr 24 '21 at 18:32
  • The storage is an aws-ebs PV. I have a Pod that spawns on a particular node because I used nodeSelector to bind the service to a specific node (or set of nodes). That node is in zone 2a, but the PV that gets created is in a different zone (2c). – Kay Apr 24 '21 at 18:33
  • I'm using https://github.com/bitnami/charts/tree/master/bitnami/mongodb which creates a PV per Mongo node; not sure which of those cases that falls under. – Kay Apr 24 '21 at 18:38