I deployed an EKS cluster with two nodes in the same subnet.

kubectl get nodes
NAME                                          STATUS   ROLES    AGE     VERSION
ip-172-31-xx-xx.xx-xx-xx.compute.internal     Ready    <none>   6h31m   v1.22.9-eks-xxxx
ip-172-31-xx-xx.xx-xxx-x.compute.internal     Ready    <none>   6h31m   v1.22.9-eks-xxxx

Everything worked fine. I then wanted to configure a NAT gateway for the subnet in which the nodes are present.
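
For reference, this is roughly what I did, expressed as AWS CLI calls (a sketch of the console steps; all IDs below are placeholders):

# Create the NAT gateway in a public subnet
aws ec2 create-nat-gateway \
    --subnet-id subnet-0aaaaaaaaaaaaaaaa \
    --allocation-id eipalloc-0bbbbbbbbbbbbbbbb

# Send the node subnet's default route through the NAT gateway
aws ec2 create-route \
    --route-table-id rtb-0cccccccccccccccc \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-0dddddddddddddddd

# Associate that route table with the node subnet
aws ec2 associate-route-table \
    --route-table-id rtb-0cccccccccccccccc \
    --subnet-id subnet-0eeeeeeeeeeeeeeee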

Once the NAT gateway was configured, all of a sudden both nodes went into the NotReady state.

kubectl get nodes
NAME                                          STATUS     ROLES    AGE     VERSION
ip-172-31-xx-xx.xx-xx-xx.compute.internal     NotReady   <none>   6h45m   v1.22.9-eks-xxxx
ip-172-31-xx-xx.xx-xxx-x.compute.internal     NotReady   <none>   6h45m   v1.22.9-eks-xxxx

kubectl get events also shows that the nodes are NotReady. I am not able to exec into pods either.

When I try kubectl exec, I get: error: unable to upgrade connection: Unauthorized

Upon removing my subnet from the route table's subnet associations (which I had added as part of creating the NAT gateway), everything worked fine again and the nodes went back into the Ready state.
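
The rollback was simply removing that association again (the association ID is a placeholder):

aws ec2 disassociate-route-table \
    --association-id rtbassoc-0ffffffffffffffff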

Any idea how to create a NAT gateway for EKS worker nodes? Is there anything I am missing?

Thanks in advance

1 Answer

I used eksctl to deploy the cluster with the following command:

eksctl create cluster \
    --name test-cluster \
    --version 1.22 \
    --nodegroup-name test-kube-workers \
    --node-type t3.medium \
    --nodes 2 \
    --nodes-min 1 \
    --nodes-max 2 \
    --node-private-networking \
    --ssh-access

and everything has been taken care of. With --node-private-networking, eksctl places the worker nodes in the private subnets of the VPC it creates, and those subnets already route outbound traffic through a NAT gateway that eksctl provisions.
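
To sanity-check the result, listing the routes in the cluster VPC should show a 0.0.0.0/0 route pointing at a NAT gateway for the private (node) subnets; the VPC ID below is a placeholder:

aws ec2 describe-route-tables \
    --filters Name=vpc-id,Values=vpc-0123456789abcdef0 \
    --query 'RouteTables[].Routes'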
