
We have an EKS cluster running version 1.16 and I was trying to upgrade it to version 1.17. Since our entire setup is deployed using Terraform, I used the same for the upgrade by setting cluster_version = "1.17". The upgrade of the EKS control plane worked fine, and I also updated kube-proxy, CoreDNS and the Amazon VPC CNI. But I am facing an issue with the worker nodes. I tried to create a new worker group; the new worker nodes were created successfully in AWS and I can see them in the EC2 console, but they did not join the cluster. I cannot see the newly created worker nodes when I run kubectl get nodes. Can anyone please guide me on this issue? Is there any extra setup I need to perform to join the worker nodes to the cluster?
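For reference, the relevant part of the Terraform configuration is roughly along these lines (a simplified sketch only; it assumes the community terraform-aws-modules/eks module, and the cluster name, subnets, VPC ID and instance type are placeholders):

```hcl
# Simplified sketch only -- assumes the community terraform-aws-modules/eks
# module; cluster name, subnets, VPC ID and instance type are placeholders.
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = "my-cluster"                      # placeholder
  cluster_version = "1.17"                            # bumped from "1.16" for the upgrade
  subnets         = ["subnet-aaaa", "subnet-bbbb"]    # placeholders
  vpc_id          = "vpc-xxxxxxxx"                    # placeholder

  worker_groups = [
    {
      name                 = "workers-1-17"           # new worker group added after the upgrade
      instance_type        = "t3.medium"              # placeholder
      asg_desired_capacity = 2
    }
  ]
}
```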

GPC
  • Have you checked the logs of the nodes? Do they show a reason why they could not connect to the EKS control plane? – Bastian Klein Jul 15 '21 at 15:51
  • @BastianKlein Thank you for the response. I also checked the logs of the EC2 instance and they look good. I see the line below in the logs: [ 29.382711] cloud-init[3695]: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service. Starting Kubernetes Kubelet... [ OK ] Started Kubernetes Kubelet. I don't see any error – GPC Jul 16 '21 at 09:15
  • make sure the service account for the Amazon VPC CNI add-on is bound to an IAM role with the managed policy `AmazonEKS_CNI_Policy` (see the sketch after these comments) – prasun Jul 17 '21 at 21:01
  • Try steps from https://stackoverflow.com/questions/73868433/worker-node-group-doesnt-join-the-eks-cluster/74278536#74278536 – Vaibhav Fouzdar Nov 01 '22 at 19:25
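
A minimal Terraform sketch of the policy attachment suggested in the comment above; the `eks_worker` role and its name are hypothetical placeholders for whatever IAM role the worker group actually assumes:

```hcl
# Worker node IAM role -- "eks_worker" and its name are placeholders.
resource "aws_iam_role" "eks_worker" {
  name = "eks-worker-node-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# Bind the AWS-managed AmazonEKS_CNI_Policy to the worker node role, as
# suggested in the comment above. Worker nodes typically also need
# AmazonEKSWorkerNodePolicy and AmazonEC2ContainerRegistryReadOnly attached.
resource "aws_iam_role_policy_attachment" "worker_cni" {
  role       = aws_iam_role.eks_worker.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}
```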

0 Answers