
I "inherited" an unmanaged EKS cluster with two nodegroups created through eksctl on Kubernetes 1.15. I upgraded the cluster to 1.17 and managed to create a new nodegroup with eksctl, and the nodes successfully join the cluster (I had to update aws-cni from 1.5.x to 1.6.x to do so). However, the cluster's Classic Load Balancer marks my two new nodes as OutOfService.
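For reference, the upgrade and nodegroup creation were done roughly like this (a sketch; `my-cluster` and `ng-117` are placeholder names, substitute your own):

```shell
# Upgrade the control plane (eksctl upgrades only the control plane;
# add-ons such as the VPC CNI plugin must be updated separately,
# which is why aws-cni had to be bumped to 1.6.x by hand).
eksctl upgrade cluster --name my-cluster --version 1.17 --approve

# Create the new nodegroup on the upgraded cluster.
eksctl create nodegroup --cluster my-cluster --name ng-117 \
  --node-type m5.large --nodes 2
```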

I noticed the Load Balancer's security group was missing from my node security groups, so I added it to my two new nodes, but nothing changed: the nodes remained unreachable from outside the EKS cluster. I could get the nodes to change their state to InService by attaching the security group of my two former nodes, but manually recreating the very same inbound/outbound rules in a new security group has no effect on traffic. Only the former nodegroup's security group works in this case. I've reached a dead end and am asking here because I can't find any additional information in the AWS documentation. Does anyone know what's wrong?
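In case it helps with diagnosis, this is roughly how I compared the two security groups and checked what the load balancer health check targets (`sg-old`, `sg-new`, and `my-service` are placeholders for my actual IDs and Service name):

```shell
# Dump the rules of the old (working) and new node security groups
# side by side; note that a rule whose source is another security
# group ID is not equivalent to a rule recreated with CIDR ranges.
aws ec2 describe-security-groups --group-ids sg-old sg-new \
  --query 'SecurityGroups[].{Id:GroupId,In:IpPermissions,Out:IpPermissionsEgress}'

# Inspect the Service behind the Classic Load Balancer; the NodePort
# shown here is what the CLB health check must reach on every node.
kubectl describe svc my-service
```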

berrur
  • What if you kubectl delete the service and re-create? – gohm'c Nov 08 '21 at 14:00
  • I would replace the whole cluster. You could build out the replacement, move the workloads, test it, then reroute traffic and retire it. This would give you the least headache down the road. – Curt Eckhart Nov 28 '21 at 22:50

0 Answers