
I create a Deployment with a volumeMount that references a PersistentVolumeClaim, along with a memory request, on a cluster with nodes in 3 different AZs: us-west-2a, us-west-2b, and us-west-2c.

The Deployment takes a while to start while the PersistentVolume is being dynamically created, but both eventually come up.

The problem I am running into is that the PersistentVolume is created in us-west-2c, and the only node the pod can run on there is already over-allocated.

Is there a way to create the Deployment and claim such that the claim is not made in a zone where no pod can be scheduled?

Michael
  • My workaround for now is to add a taint to the node that is over-allocated, and that seems to ensure the claim and deployment don't end up on the over-allocated machine. ``` kubectl taint nodes node1 key=value:NoSchedule ``` – Michael Mar 25 '19 at 19:58
  • Nevermind, I was just getting lucky; it created the volume on us-west-2c again, and my EC2 worker node on us-west-2c had the taint, but no luck – Michael Mar 25 '19 at 20:01

1 Answer


I believe you're looking for the Topology Awareness feature.

Topology Awareness

In Multi-Zone clusters, Pods can be spread across Zones in a Region. Single-Zone storage backends should be provisioned in the Zones where Pods are scheduled. This can be accomplished by setting the Volume Binding Mode.

Kubernetes released the topology-aware dynamic provisioning feature in version 1.12, and I believe this will solve your issue.
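As a minimal sketch, the key setting is `volumeBindingMode: WaitForFirstConsumer` on the StorageClass, which delays provisioning until a pod using the claim has been scheduled, so the volume is created in that pod's zone. The class name `topology-aware-ebs` and the `kubernetes.io/aws-ebs` provisioner below are assumptions based on your EBS setup; adjust them for your environment:

```yaml
# StorageClass that defers PV provisioning until a pod using the PVC
# is scheduled, so the volume lands in the zone of the chosen node.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-ebs        # illustrative name
provisioner: kubernetes.io/aws-ebs
volumeBindingMode: WaitForFirstConsumer
```

Your PersistentVolumeClaim would then reference this class via `storageClassName: topology-aware-ebs`, and the PersistentVolume is only provisioned after the scheduler has picked a node that can actually run the pod.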

clxoid