I created a Kubernetes cluster installed by k0s on an AWS EC2 instance. To make delivering new clusters faster, I tried to build an AMI from it.
However, when I started a new EC2 instance from that AMI, the internal IP changed and the node became NotReady:
ubuntu@ip-172-31-26-46:~$ k get node
NAME               STATUS     ROLES    AGE   VERSION
ip-172-31-18-145   NotReady   <none>   95m   v1.21.1-k0s1
ubuntu@ip-172-31-26-46:~$
Would it be possible to reconfigure it?
Workaround
I found a workaround to make the AWS AMI work.
Short answer
- install the node with kubelet's `--extra-args`
- update the kube-api address to the new IP and restart the kubelet
Details :: 1
In a Kubernetes cluster, the kubelet plays the node-agent role. It tells the kube-api, "Hey, I am here and my name is XXX."
The name of a node is its hostname and cannot be changed after the node is created, but it can be set at install time with `--hostname-override`.
If you don't override the node name, the kube-api will try to use the new hostname, and you will get errors because the old node name is not found.
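As a minimal sketch of pinning the node name at install time (the `--kubelet-extra-args` flag and the `fixed-node-name` value are assumptions; check `k0s install --help` for your version):

```bash
# Pin the kubelet node name so a new hostname on a fresh EC2
# instance does not change the node's identity in the cluster.
# "fixed-node-name" is a hypothetical placeholder.
# On a single-node setup, the same flag may be passed to
# `k0s install controller --enable-worker` instead.
sudo k0s install worker --kubelet-extra-args="--hostname-override=fixed-node-name"
sudo k0s start
```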
Details :: 2
For k0s, the kubelet's KUBECONFIG lives at /var/lib/k0s/kubelet.conf, which contains the kube-api server location:
server: https://172.31.18.9:6443
To connect to the new kube-api location, update this address and restart the kubelet.
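A minimal sketch of that update, assuming the node can read its new internal IP from the EC2 instance metadata service and that k0s runs as the `k0sworker` systemd service (on a single-node install it may be `k0scontroller` instead):

```bash
# Fetch this instance's new internal IP from the EC2 metadata service.
NEW_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)

# Point the kubelet's kubeconfig at the new kube-api address.
sudo sed -i "s|server: https://.*:6443|server: https://${NEW_IP}:6443|" /var/lib/k0s/kubelet.conf

# Restart k0s so the kubelet picks up the new server address.
sudo systemctl restart k0sworker
```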