I have a k8s cluster running in AWS with 1 master and 2 worker nodes.
When I increased the replica count of my Deployment, all of the replica pods were scheduled on the same node. Is there a way to distribute them across nodes?
sh-3.2# kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE
backend-6b647b59d4-hbfrp      1/1     Running   0          3h    100.96.3.3   node1
api-server-77765b4548-9xdql   1/1     Running   0          3h    100.96.3.1   node2
api-server-77765b4548-b6h5q   1/1     Running   0          3h    100.96.3.2   node2
api-server-77765b4548-cnhjk   1/1     Running   0          3h    100.96.3.5   node2
api-server-77765b4548-vrqdh   1/1     Running   0          3h    100.96.3.7   node2
api-db-85cdd9498c-tpqpw       1/1     Running   0          3h    100.96.3.8   node2
ui-server-84874d8cc-f26z2     1/1     Running   0          3h    100.96.3.4   node1
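From what I have read, pod anti-affinity might be the mechanism for spreading replicas, but I am not sure. This is only a sketch of what I think the Deployment change would look like (it assumes the api-server pods carry an app: api-server label, which may not match my actual manifests):

# Sketch only: soft anti-affinity so api-server replicas prefer different nodes.
# Assumes the pods are labelled app: api-server -- adjust to the real labels.
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: api-server
              topologyKey: kubernetes.io/hostname

Is something like this the recommended approach for distributing the pods?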
And when I stopped/terminated the AWS instance for node2, the pods stayed in Pending state instead of being rescheduled onto the available node. Is there a way to control this behaviour?
sh-3.2# kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE
backend-6b647b59d4-hbfrp      1/1     Running   0          3h    100.96.3.3   node1
api-server-77765b4548-9xdql   0/1     Pending   0          32s   <none>       <none>
api-server-77765b4548-b6h5q   0/1     Pending   0          32s   <none>       <none>
api-server-77765b4548-cnhjk   0/1     Pending   0          32s   <none>       <none>
api-server-77765b4548-vrqdh   0/1     Pending   0          32s   <none>       <none>
api-db-85cdd9498c-tpqpw       0/1     Pending   0          32s   <none>       <none>
ui-server-84874d8cc-f26z2     1/1     Running   0          3h    100.96.3.4   node1
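My rough understanding is that pods on an unreachable node are only evicted after the node taint's toleration expires, so maybe that is part of the delay I am seeing. Here is a sketch of tolerations in the pod template that I believe would shorten that window (the 30-second value is just an example I picked, not something from my current manifests):

# Sketch only: evict pods from a not-ready/unreachable node after 30s
# instead of the default 300s.
spec:
  template:
    spec:
      tolerations:
      - key: node.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 30
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 30

Is this the right way to make the pods move to the remaining node faster, or is something else keeping them in Pending?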