
I have a Kubernetes cluster spread across two zones, A and B. I am using NFS volumes for persistent storage, and I have NFS volumes in both zones. I am creating a StatefulSet of 2 replicas which will be spread across these zones (I used pod anti-affinity to achieve this). Now I want the pods in zone A to use the volumes in zone A and the ones in zone B to use the volumes in zone B.
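For reference, the anti-affinity rule in the StatefulSet looks roughly like this (the app: my-app names and the nginx image are illustrative placeholders, not my actual config):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # Require the two replicas to land in different zones
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: failure-domain.beta.kubernetes.io/zone
      containers:
        - name: app
          image: nginx   # illustrative container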

I can add labels to the persistent volumes and match the persistent volume claims (PVCs) against those labels. But how do I make sure that the PVC for a pod does not get bound to a PV in another zone?
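For example, I can label a PV and have a PVC select it like this (the volume names, size, and NFS server address are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-zone-a
  labels:
    zone: zoneA          # custom label identifying the volume's zone
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 10.0.0.10    # illustrative NFS server in zone A
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-zone-a
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      zone: zoneA        # bind only to PVs labeled zone=zoneA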

raiyan
  • How do you provision the PVs? If you're using dynamic volume provisioning with https://kubernetes.io/docs/concepts/storage/storage-classes/, then, if I'm not mistaken, that should solve the problem. – Michael Hausenblas Jul 20 '18 at 11:24
  • There is no dynamic provisioning support for NFS as of now. And I am not sure if this -> https://github.com/kubernetes-incubator/external-storage is production ready. – raiyan Jul 20 '18 at 11:27
  • Yup, sorry, my bad. There's https://github.com/kubernetes-incubator/external-storage/tree/master/nfs but likely not production ready ;) – Michael Hausenblas Jul 20 '18 at 11:29
  • Is there any workaround to achieve this? – raiyan Jul 20 '18 at 11:38
  • Hmmm, only thing I can think of right now is trying to see if you can use `nodeAffinity` (as described in https://kubernetes.io/docs/concepts/configuration/assign-pod-node/) for it? – Michael Hausenblas Jul 20 '18 at 11:51
  • Do you use a cloud provider environment to provision the K8s cluster? – Nick_Kh Jul 20 '18 at 15:01
  • No, it is a local cluster created manually with kubeadm. – raiyan Jul 22 '18 at 16:12

1 Answer


You can bind persistent volume claims (PVCs) to persistent volumes (PVs) and split your pods between the two zones using the special built-in label failure-domain.beta.kubernetes.io/zone. If you create volumes manually, you can label each of them with failure-domain.beta.kubernetes.io/zone: zoneA (or zoneB); the scheduler then ensures that a pod using such a volume is only scheduled to nodes in the same zone as its persistent volume.

For example, to set the label on a node and on a PV:

kubectl label node <node-name> failure-domain.beta.kubernetes.io/zone=zoneA

kubectl label pv <pv-name> failure-domain.beta.kubernetes.io/zone=zoneA
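A manually created PV carrying this label might look like the following sketch (the NFS server address, path, and size are illustrative assumptions):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-zone-a
  labels:
    failure-domain.beta.kubernetes.io/zone: zoneA   # zone label checked by the scheduler
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 10.0.0.10    # illustrative NFS server in zone A
    path: /exports/data

The scheduler's volume zone predicate then keeps any pod using this volume on nodes labeled with the same zone.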

You can find more useful information in the official Kubernetes documentation on running clusters in multiple zones.

Nick_Kh
  • Oh great, thanks for this. I was using nodeAffinity and podAntiAffinity to achieve this. This seems so much simpler. – raiyan Jul 24 '18 at 06:46