I'm running a k8s cluster in AWS, and I got the following error after running kubectl cluster-info:
E0902 13:22:08.516718 897 memcache.go:265] couldn't get current server API group list: Get "https://api.kubevpro.quickesh.com/api?timeout=32s": dial tcp 203.0.113.123:443: i/o timeout
Unable to connect to the server: dial tcp 203.0.113.123:443: i/o timeout
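For context, the address in the error can be confirmed with a plain DNS lookup (the hostname is taken from the error above):

```shell
# Check what the cluster API hostname currently resolves to. While this
# problem persists, the lookup returns the kops placeholder address
# (203.0.113.123) instead of the control plane's real public IP.
dig +short api.kubevpro.quickesh.com
```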
What I have tried
- Ran
kops validate cluster --state=s3://ky-bucket-kops --name=kubevpro.quickesh.com
and got the following result:
Validating cluster kubevpro.quickesh.com
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
control-plane-us-east-1a ControlPlane t3.medium 1 1 us-east-1a
nodes-us-east-1a Node t3.small 1 1 us-east-1a
nodes-us-east-1b Node t3.small 1 1 us-east-1b
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a control plane node to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
Error: validation failed: cluster not yet healthy
From the answer to this question: The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's - AWS, I think the cause might be that kops does not update the public IP DNS record after an EC2 instance is stopped and restarted. I did create the cluster with kops, and I did shut the instance down and restart it.
So how should I update the public IP address manually in Route 53? I tried to find the record in the Route 53 dashboard but found nothing.
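For reference, this is the kind of manual update I believe is needed, sketched with the AWS CLI. The instance Name tag pattern is a guess based on kops naming defaults, and I'm assuming a public hosted zone for the cluster domain exists; none of these specifics are confirmed in the question.

```shell
# 1. Get the control plane node's current public IP (the Name tag pattern
#    here is an assumption based on kops's usual naming convention):
PUBLIC_IP=$(aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=control-plane-us-east-1a.masters.kubevpro.quickesh.com" \
            "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PublicIpAddress" --output text)

# 2. Find the hosted zone ID for the cluster domain:
ZONE_ID=$(aws route53 list-hosted-zones-by-name \
  --dns-name kubevpro.quickesh.com \
  --query "HostedZones[0].Id" --output text)

# 3. UPSERT the api A record so it points at the new public IP:
aws route53 change-resource-record-sets \
  --hosted-zone-id "$ZONE_ID" \
  --change-batch "{
    \"Changes\": [{
      \"Action\": \"UPSERT\",
      \"ResourceRecordSet\": {
        \"Name\": \"api.kubevpro.quickesh.com\",
        \"Type\": \"A\",
        \"TTL\": 60,
        \"ResourceRecords\": [{\"Value\": \"$PUBLIC_IP\"}]
      }
    }]
  }"
```

Note this only patches the record once; if the instance gets a new public IP on every stop/start, the record would need updating each time (or an Elastic IP would avoid the problem entirely).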