My k8s cluster initially had 2 nodes and 1 master, and I deployed a StatefulSet with 3 pods, so the 3 pods (each with its own PVC) are running on the 2 nodes. I have now increased the node count from 2 to 3, so the cluster has 3 nodes and 1 master. I would like to move one of the StatefulSet pods to the newly added node, without deleting its PVC, so that the 3 pods are spread across the 3 nodes. I tried deleting the pod, but it gets recreated on the same node, not on the new node (which is expected). Can anyone please let me know if it is possible to move one pod to another node without deleting the PVC? Is this achievable? Or is there an alternate solution? I do not want to delete the PVC.
Viewed 6,238 times
- Share details about the PVC: is it the host's filesystem? – Arghya Sadhu Jun 17 '20 at 03:39
- Thanks Arghya Sadhu, the PVC is an AWS EBS volume. K8s is deployed in AWS. – Ram Jun 17 '20 at 03:41
- Do you have any taints on the newly created node, or (anti)affinity set in the pod spec? – kool Jun 18 '20 at 12:45
- No, they do not have any taints, but podAntiAffinity is set as: `"affinity": { "podAntiAffinity": { "preferredDuringSchedulingIgnoredDuringExecution": [ { "weight": 100, "podAffinityTerm": { "labelSelector": { "matchExpressions": [` – Ram Jun 18 '20 at 13:13
- Could you add the `topologyKey` used in podAntiAffinity? I tried to reproduce your issue, and every time I scaled the statefulset (either using kubectl `scale` or `patch`) the new pod ended up on the 3rd node. – kool Jun 19 '20 at 13:26
- Thanks KFC, I got diverted from this task because of other priority issues; I will try this. – Ram Jun 22 '20 at 15:44
3 Answers
You can force a pod to be started on a different node by cordoning the node that the pod is running on and then redeploying the pod. That way Kubernetes has to place it onto a different node. You can uncordon the node afterwards.
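As a minimal sketch of the steps above (the StatefulSet pod name `web-2` and node name `node-1` are placeholders, not from the question):

```shell
# Mark the node the pod currently runs on as unschedulable;
# existing pods keep running, but no new pods land here.
kubectl cordon node-1

# Delete the pod. The StatefulSet controller recreates it with the same
# name and the same PVC, but the scheduler must now pick a different node.
kubectl delete pod web-2

# Once the pod is Running on the new node, allow scheduling again.
kubectl uncordon node-1
```

One caveat with EBS-backed PVCs: an EBS volume can only be attached to a node in the same availability zone, so the newly added node must be in the same AZ as the volume for the pod to be schedulable there.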

Moritur
- I got an event: Warning FailedMount Unable to attach or mount volumes: unmounted volumes=[my-pvc], unattached volumes=[my-pvc kube-api-access-skwzd]: timed out waiting for the condition – Mikolaj Feb 07 '22 at 10:41
It's not recommended to delete the pods of a StatefulSet directly. Instead, you can scale the StatefulSet down to 2 replicas and then back up to 3:
kubectl get statefulsets <stateful-set-name>
kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
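For example, assuming the StatefulSet is named `web` (a placeholder name):

```shell
# Scale down: the StatefulSet removes its highest-ordinal pod (web-2);
# the pod's PVC is retained, not deleted.
kubectl scale statefulsets web --replicas=2

# Scale back up: web-2 is recreated and the scheduler picks a node for it.
kubectl scale statefulsets web --replicas=3
```

Note that the recreated pod re-binds to its original PVC, so with an EBS volume it can only land on a node in the volume's availability zone.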

Arghya Sadhu
- I tried this, but every time the statefulset creates the pod on the same node it was previously running on. – Ram Jun 17 '20 at 19:02