There is a Kubernetes cluster on IBM Cloud Private with two workers. I have one deployment that creates two pods. How can I force the deployment to schedule its pods on two different workers? That way, if I lose one ICP worker, I always have the other one running the needed pod.
3 Answers
If you do not want pods to schedule on the same node, the concept you want to use is inter-pod anti-affinity. https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature
Observe:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - my-app
            topologyKey: kubernetes.io/hostname
      containers:
      - name: my-app
        image: my-app:latest  # placeholder; substitute your application's image
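After applying this, you can verify the spread with `kubectl get pods -o wide`; the NODE column shows which worker each replica landed on.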

Thank you for your idea. Actually, the option `preferredDuringSchedulingIgnoredDuringExecution` fits me even better, because I can have more pods than workers in the cluster. I have only one question: there is a field `weight` with two values, 1 and 100. Do you know anything about this field? – Risha Mar 23 '18 at 16:10
This is interesting. The link states that this was introduced in 1.4 and is still marked as beta. Any reason why it is still in beta? It also states that this is not recommended for very large clusters (100+ nodes). Though this does not apply to Risha's query (the query involves only 2 worker nodes), it is good to be aware of this recommendation. – Manglu Apr 12 '18 at 02:31
You can create your pods as a Kubernetes DaemonSet. A DaemonSet ensures that all (or some) nodes run a copy of a pod; a minimal manifest sketch follows. You can access the link below to see the details. https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
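For reference, a minimal DaemonSet manifest might look like the following (the name `my-app` and image `my-app:latest` are placeholders, not from the original question):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest  # placeholder; substitute your application's image

Note that a DaemonSet runs exactly one pod per node, so you give up control over the replica count.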

In addition to @Santiclause's answer regarding the scheduling policy, note that there are two different modes of affinity:
requiredDuringSchedulingIgnoredDuringExecution
preferredDuringSchedulingIgnoredDuringExecution.
When using requiredDuringSchedulingIgnoredDuringExecution, all rules must be met for a pod to be scheduled. If, for example, there are not enough nodes to spawn all the pods, the scheduler will wait indefinitely until enough nodes become available, and the unplaced pods will stay in the Pending state.
If you use preferredDuringSchedulingIgnoredDuringExecution, the scheduler will try to spawn all replicas based on the highest score each node gets from the combination of the defined rules and their weights.
Weight is a parameter used along with a rule; each rule can have a different weight. To calculate the score for a node, the following logic is used: for every node, we iterate through the rules defined in the configuration (e.g. resource requests, requiredDuringScheduling, affinity expressions, etc.). Whenever a rule is matched, its weight value is added to that node's score. Once all rules for all nodes are processed, we have a list of all nodes with their final scores, and the node(s) with the highest score are the most preferred.
To summarize, a higher weight value increases the importance of a rule and helps the scheduler decide which node to choose.
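A minimal sketch of the preferred variant, reusing the `my-app` labels from the accepted answer (the weight of 100 is illustrative; valid weights range from 1 to 100):

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100  # higher weight = stronger preference for spreading
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - my-app
          topologyKey: kubernetes.io/hostname

Unlike the required variant, this rule still lets the scheduler co-locate pods when there are more replicas than nodes, which is exactly the case Risha describes in the comments.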
