
I would like to mount an Amazon EBS volume (with data on it) to my pod. The problem is that I haven't found a way to determine the availability zone of the pod in advance, before starting it. If the pod doesn't start in the same availability zone as the volume, it leads to a binding error.

How can I specify or determine the availability zone of a pod before starting it?

Noé

2 Answers


You use the topology.kubernetes.io/zone label and node selectors for this kind of thing. However unless you're on a very old version of Kubernetes, this should be handled automatically by the scheduler.
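A minimal sketch of that approach: a pod spec pinned to the volume's zone with a `nodeSelector` on the `topology.kubernetes.io/zone` label. The zone value, image, and claim name here are placeholders for illustration, not taken from the question.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ebs-pod
spec:
  # Only nodes labeled with this zone are eligible; use the zone
  # where the existing EBS volume lives (us-east-1a is a placeholder).
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: ebs-claim   # hypothetical PVC bound to the EBS volume
```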

coderanger
  • I read the k8s documentation and I don't understand how it can help me. As far as I understood, it helps a service reach the "closest" node. – Noé Mar 13 '20 at 18:53
  • In the current version, volumes apply topology limits as part of node scheduling. So if a volume attached to the pod is in us-east-1a then only nodes in that zone will be considered. – coderanger Mar 13 '20 at 19:46
  • I have Istio sidecars running on more than 2000 pods in production. The Prometheus instance (let's call it the collector) which scrapes these pods incurs a lot of cross-AZ data transfer costs, which could be reduced if I had one collector Prometheus per AZ federated up to a parent Prometheus that scrapes aggregated metrics. The reason I am interested in this question is that I want my collector Prometheus instances to scrape pods in their respective AZs only. – Harshal Shah Feb 18 '21 at 22:35
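The scheduler behavior described in the comment above depends on volume topology. With dynamically provisioned volumes, you can make this explicit with `volumeBindingMode: WaitForFirstConsumer`, which delays volume creation until a pod is scheduled, so the volume lands in the pod's zone rather than the other way around. A sketch (the class name is illustrative; the provisioner assumes the AWS EBS CSI driver):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-topology-aware     # illustrative name
provisioner: ebs.csi.aws.com   # AWS EBS CSI driver
volumeBindingMode: WaitForFirstConsumer  # bind/provision only after a pod
                                         # using the claim is scheduled
```

Note this helps for new volumes; for a pre-existing volume with data, the pod must still be steered to the volume's zone.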

I'm not sure it's possible to determine ahead of time where a pod is going to be scheduled.

What you can do is set node affinity on your deployments so that they always deploy to nodes with certain labels.

For example, if you label your nodes with their AZ and then use node affinity to assign your pods to those nodes, you should accomplish what you're trying to do.

https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity
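A minimal sketch of the node-affinity approach from the linked docs, assuming nodes carry the standard `topology.kubernetes.io/zone` label (cloud providers typically set this automatically; the zone value and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: az-pinned-pod
spec:
  affinity:
    nodeAffinity:
      # Hard requirement: schedule only onto nodes in the volume's zone.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                  - us-east-1a   # zone of the existing EBS volume (placeholder)
  containers:
    - name: app
      image: nginx
```

Compared to a plain `nodeSelector`, node affinity also supports soft preferences (`preferredDuringSchedulingIgnoredDuringExecution`) and richer operators such as `In` and `NotIn`.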

mcfinnigan
  • Good idea. But this isn't a perfect workaround, since you can have nodes that aren't used just because they're not in the same AZ, and so you only partially use your cluster :/ – Noé Mar 13 '20 at 18:58