I deployed an EKS cluster with two worker nodes in the same subnet.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-xx-xx.xx-xx-xx.compute.internal Ready <none> 6h31m v1.22.9-eks-xxxx
ip-172-31-xx-xx.xx-xxx-x.compute.internal Ready <none> 6h31m v1.22.9-eks-xxxx
Everything worked fine. I then wanted to configure a NAT gateway for the subnet the nodes are in.
As soon as the NAT gateway was configured, both nodes suddenly went into the NotReady state.
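For reference, the NAT gateway setup was roughly the following (all IDs below are placeholders, not my real resource IDs):

```shell
# Allocate an Elastic IP for the NAT gateway.
aws ec2 allocate-address --domain vpc

# Create the NAT gateway. I created it in the same subnet the
# worker nodes are in (placeholder subnet and allocation IDs).
aws ec2 create-nat-gateway \
    --subnet-id subnet-0123456789abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0
```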
kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-xx-xx.xx-xx-xx.compute.internal NotReady <none> 6h45m v1.22.9-eks-xxxx
ip-172-31-xx-xx.xx-xxx-x.compute.internal NotReady <none> 6h45m v1.22.9-eks-xxxx
kubectl get events
also shows the nodes as NotReady. I am not able to exec into any pod either; when I try kubectl exec, I get:
error: unable to upgrade connection: Unauthorized
Once I removed my subnet from the route table's subnet associations (which I had added as part of creating the NAT gateway), everything worked fine again and the nodes went back to the Ready state.
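The route-table changes that triggered the problem, and the revert that fixed it, were roughly these (all IDs are placeholders):

```shell
# Point the route table's default route at the NAT gateway.
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-0123456789abcdef0

# Associate the nodes' subnet with that route table.
# This is the step after which the nodes went NotReady.
aws ec2 associate-route-table \
    --route-table-id rtb-0123456789abcdef0 \
    --subnet-id subnet-0123456789abcdef0

# Removing the association brought the nodes back to Ready.
aws ec2 disassociate-route-table \
    --association-id rtbassoc-0123456789abcdef0
```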
Any idea how to correctly create a NAT gateway for EKS worker nodes? Is there anything I am missing?
Thanks in advance.