
I have been trying to upgrade a bare-metal Kubernetes cluster (3 masters, 3 workers) from version 1.21.2 to 1.22.6, and was successful in upgrading all the master nodes.

Coming to the first worker node, I upgraded kubeadm and then tried to execute kubeadm upgrade node, which fails:

root@worker01:~# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
unable to fetch the kubeadm-config ConfigMap: failed to getAPIEndpoint: could not retrieve API endpoints for node "worker01" using pod annotations: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
root@worker01:~#
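Since the error mentions looking up API endpoints "using pod annotations", one possible reading is that kubeadm is treating worker01 as a control-plane node and searching for a kube-apiserver static pod that does not exist there. A hedged way to check for that (assuming kubectl works from a machine with a valid kubeconfig, and using the default kubeadm paths) could be:

```shell
# On worker01: a worker should have NO control-plane static pod manifests here.
# If kube-apiserver.yaml (or etcd.yaml, etc.) shows up, kubeadm will treat
# this node as a control-plane member.
ls -l /etc/kubernetes/manifests/

# From any machine with cluster access: check the kubeadm annotations and
# labels on the node object. A worker should not carry the
# node-role.kubernetes.io/control-plane (or master) label.
kubectl get node worker01 --show-labels
kubectl get node worker01 -o jsonpath='{.metadata.annotations}'

# Check whether a kube-apiserver mirror pod exists for this node
# (it should not, on a worker).
kubectl -n kube-system get pods -o wide | grep worker01
```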

I was expecting it to just see that this is not a control plane node, skip that part of the upgrade, and proceed with the kubelet configuration update.

Has anyone seen this error before and know how to solve it?

Looking at the error, is it that kubeadm is unable to recognize this node as a worker node? kubectl get nodes shows all master and worker nodes in the Ready state.

kubeadm upgrade node completed successfully on the other 2 worker nodes.

Any help will be highly appreciated.

Thanks and regards, Sivarama Raju P

  • sounds like a configuration issue, maybe in /etc/kubernetes/admin.conf on whichever worker you could not upgrade? Try to compare configs in between worker nodes, there's probably some difference - beyond your upgrade. Also: check for proxies, maybe? – SYN Oct 17 '22 at 21:33
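Following the comment's suggestion, one way to diff the node-local configs between a worker that upgraded cleanly (say worker02) and the failing worker01 might look like the sketch below. The file paths are the kubeadm defaults and the hostnames are assumptions; adjust both to your environment.

```shell
# Run from a machine with SSH access to both workers.
# Compare the kubelet kubeconfig and kubeadm flags between a good and a bad node.
for f in /etc/kubernetes/kubelet.conf /var/lib/kubelet/kubeadm-flags.env; do
  echo "=== $f ==="
  diff <(ssh worker01 cat "$f") <(ssh worker02 cat "$f")
done

# Also check for proxy settings on the failing node that could block
# access to the API server during the ConfigMap fetch.
ssh worker01 'env | grep -i proxy; cat /etc/environment'
```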

0 Answers