One of my nodes went into NotReady state, so I tried deleting and recreating it. But now kubectl is not reflecting the restarted node, even though kops says the node is ready and I can see the instance in the AWS console. It looks like kubectl is not getting updated.
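A typical delete/recreate flow for a kops-managed node looks like this (node name and instance id are placeholders, not the exact commands I ran):

kubectl delete node ip-xxx-xx-xx-xxx.ap-south-1.compute.internal   # remove the stale Node object
aws ec2 terminate-instances --instance-ids i-xxxxxxxxxxxxxxxxx     # the ASG launches a replacement

In kops: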

kops get instancegroups --name kubernetes.xxxxxx.xxx --state s3://kops-state-xxxxxxxx

NAME                ROLE    MACHINETYPE MIN MAX ZONES
master-ap-south-1a  Master  t2.micro    1   1   ap-south-1a
nodes               Node    t2.micro    2   2   ap-south-1a
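
What kops sees can also be cross-checked against the API server with kops validate (same cluster name and state store):

kops validate cluster --name kubernetes.xxxxxx.xxx --state s3://kops-state-xxxxxxxx

It lists every node registered with the API, so a node that kubectl doesn't see should make validation fail.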

In kubectl:

kubectl get nodes
NAME                                           STATUS         AGE       VERSION
ip-xxx-xx-xx-xxx.ap-south-1.compute.internal   Ready,node     32d       v1.8.7
ip-xxx-xx-xx-xxx.ap-south-1.compute.internal   Ready,master   32d       v1.8.7
    Have you checked kubelet logs on the new node? Sounds like the node is unable to register with the Kubernetes API. Would be nice to include some relevant parts from the log in your post :-) – embik May 12 '18 at 16:13
  • I guess it must have been some caching issue. Looks like it's reflecting now. – Gaudam Thiyagarajan May 14 '18 at 05:26
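
Following up on embik's suggestion, kubelet logs on the new node can be pulled like this (assuming a systemd-based image; the SSH user depends on the AMI, e.g. admin on the default kops Debian images):

ssh admin@<new-node-ip>                               # SSH user is AMI-dependent
sudo journalctl -u kubelet --no-pager | tail -n 100   # recent kubelet entries

Registration problems (TLS errors, API server connectivity) typically show up here.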

0 Answers