
I have a set of pods belonging to different deployments. All are configured to run as single replicas. Further, I have 2 nodes in my cluster. Now, when I try to schedule my pods, all of them get deployed to the same node; it is very rare that a pod goes to the other node.

Because of this, one node is always under memory pressure with utilization near 90% while the other sits near 30%. As a result, if my pods try to consume more than 80% of their limits, they are killed by Kubernetes saying the node does not have enough resources.

How can I spread my pods evenly across the nodes? Or what could possibly be wrong with my cluster? I have read through topology spread constraints, but they only talk about spreading pods belonging to one deployment.


2 Answers


You are right that topology spread constraints are geared towards one deployment. There could be several reasons behind this behavior of Kubernetes.

One could be that you have set resource requests & limits low enough that Kubernetes thinks it is fine to run both pods on a single node, so it schedules them together. Another could be that you have not set any requests & limits at all.

Try increasing the requests & limits of the deployments and you will see a difference in scheduling.
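For example, here is a minimal sketch of a deployment with explicit requests & limits (the name, image, and values are placeholders to adjust to your workloads). If the memory request is more than half of a node's allocatable memory, two such pods can no longer fit on the same node:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a                            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-a
  template:
    metadata:
      labels:
        app: app-a
    spec:
      containers:
      - name: app-a
        image: registry.k8s.io/pause:2.0 # placeholder image
        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"                # sized so two such pods do not fit on one node
          limits:
            cpu: "1"
            memory: "2Gi"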

Meanwhile, you can also use affinity (node/pod affinity); taints & tolerations are also a good option to separate the pods onto the available nodes. Affinity also works across deployments.

Ref : https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - zone-east1
  containers:
  - name: with-node-affinity
    image: registry.k8s.io/pause:2.0
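If you prefer the taints & tolerations route, a minimal sketch (the node name, taint key, and value are placeholders):

kubectl taint nodes node-1 dedicated=app-a:NoSchedule

apiVersion: v1
kind: Pod
metadata:
  name: with-toleration
spec:
  tolerations:                           # allows this pod onto the tainted node
  - key: "dedicated"
    operator: "Equal"
    value: "app-a"
    effect: "NoSchedule"
  containers:
  - name: with-toleration
    image: registry.k8s.io/pause:2.0

Note that a toleration only permits the pod to land on the tainted node; combine it with node affinity (as above) if you want to pin it there.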
– Harsh Manvar

You can use the descheduler together with the LowNodeUtilization policy to try to balance pods across nodes: https://github.com/kubernetes-sigs/descheduler#lownodeutilization
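A minimal policy sketch, using the descheduler's v1alpha1 policy format; the threshold values are placeholders to tune for your cluster:

apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:           # below this a node counts as underutilized
          "cpu": 20
          "memory": 20
          "pods": 20
        targetThresholds:     # above this a node is a candidate to evict pods from
          "cpu": 50
          "memory": 50
          "pods": 50

Pods evicted from the overutilized node are then rescheduled by the normal scheduler, which should place them on the emptier node.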

Alternatively, you can try pod anti-affinity and make all pods "hate" each other; that would help spread them more evenly.
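A sketch of preferred pod anti-affinity, assuming you add a common label such as app-group: my-apps (a placeholder) to every deployment's pod template:

apiVersion: v1
kind: Pod
metadata:
  name: app-b                            # placeholder
  labels:
    app-group: my-apps                   # shared label across all your pods
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app-group: my-apps
          topologyKey: kubernetes.io/hostname   # prefer a node not already running these pods
  containers:
  - name: app-b
    image: registry.k8s.io/pause:2.0

Using preferred rather than required anti-affinity keeps the pods schedulable even when every node already runs one of them.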

– 4c74356b41