Is there any way to convert/change a multi-master setup (3 masters, HA & LB) to a single master in a stacked etcd configuration?
With 3 master nodes, etcd only tolerates 1 failure, right? So if 2 of these master nodes go down, the control plane stops working.
What I need is to convert these 3 masters into a single master. Is there a way to do this that minimizes control-plane downtime (in case the other 2 masters need some time to come back up)?
The test I've done: I restored an etcd snapshot into a completely different environment with a fresh setup of 1 master & 2 workers, and it seems to work fine: the 2 old master nodes show as NotReady, the 2 worker nodes are Ready, and requests to the api-server work normally.
But if I restore the etcd snapshot in the original environment, after resetting the last master node with kubeadm reset, the cluster seems to be broken: the 2 workers show as NotReady, apparently because the node now has different certificates.
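In both tests the restore itself was done along these lines (a sketch; the snapshot file name and data directory are placeholders for my environment, and the way I wired the restored directory back into etcd may differ from what you'd do):

```
# Restore the snapshot into a new data directory
# (file name and target directory are placeholders).
ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot.db \
  --data-dir /var/lib/etcd-restored

# Then point the etcd static pod at the restored data directory,
# e.g. edit the etcd hostPath volume in
# /etc/kubernetes/manifests/etcd.yaml to use /var/lib/etcd-restored.
```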
Any suggestion on how to make this work?
UPDATE: apparently I can restore the etcd snapshot directly without running "kubeadm reset"; and even after a reset, as long as the certificates are updated, the cluster is restored successfully.
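By "update the certificates" I mean (as far as I understand it) keeping the original cluster CA across the reset so the workers' kubelet certificates stay trusted; roughly a sketch like the following, assuming the default kubeadm PKI paths (the backup location is just a placeholder):

```
# Keep the original kubeadm PKI (including the cluster CA) across the reset
# so the workers' kubelet certificates remain trusted.
cp -r /etc/kubernetes/pki /root/pki-backup

kubeadm reset -f

# Put the preserved CA/certificates back, then regenerate anything missing.
mkdir -p /etc/kubernetes
cp -r /root/pki-backup /etc/kubernetes/pki
kubeadm init phase certs all
kubeadm init phase kubeconfig all
```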
BUT now I've run into a different issue. After restoring the etcd snapshot everything works fine, and I want to add a new control plane to this cluster. The current node status is:

master1   Ready
master2   NotReady
master3   NotReady
Before adding the new control plane, I removed the 2 failed master nodes from the cluster (roughly as sketched at the end of this post). After removing them I tried to join the new control plane, and the join process gets stuck at:

[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[kubelet-check] Initial timeout of 40s passed.

Now the original master node is broken again and I can't reach the api-server. Do you guys have any idea what's going wrong?
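For reference, this is roughly how I removed the two failed masters before attempting the join. I'm not certain I also cleaned up the stale etcd members correctly, so I've included that step as I understand it (node names, the endpoint, and the certificate paths are from my environment and may differ in yours):

```
# Remove the failed control-plane nodes from the Kubernetes API.
kubectl delete node master2 master3

# Check whether the old masters are still listed as etcd members
# (stacked etcd keeps its own member list, separate from the node objects).
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list

# Remove any stale member by the ID shown in the listing above.
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member remove <MEMBER_ID>
```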