I tried creating a node group inside the EKS cluster. After creating the node group, the coredns add-on displays as degraded under Add-ons. I tried all the possibilities I found on Google but was unable to resolve this. Can someone help with this?

- Can you post the complete output of `kubectl get deploy -n kube-system coredns -o yaml` to your question. – gohm'c Feb 08 '22 at 01:15
- You can try this out. It might be related to NAT: [https://serverfault.com/questions/1077378/aws-eks-add-on-coredns-status-as-degraded-and-node-group-creation-failed-is-una?newreg=41a28d6601b04077bc716fa687a0fd7a](https://serverfault.com/questions/1077378/aws-eks-add-on-coredns-status-as-degraded-and-node-group-creation-failed-is-una?newreg=41a28d6601b04077bc716fa687a0fd7a) – Dame Lyngdoh Jul 28 '22 at 18:12
2 Answers
If you are using AWS EKS with Fargate, you have to add labels when creating the coredns Fargate profile; that way coredns will be able to find compute for its deployment and start working. Here are the steps:
Let's get the pods available in the `kube-system` namespace:

```shell
kubectl get pods -n kube-system
```
We can see that the coredns pods are stuck in the Pending state. Let's check why they are stuck (there are no nodes available on the AWS EKS cluster to deploy the coredns pods onto):

```shell
kubectl describe pods [pods_name] -n kube-system
```

If we scroll up a bit in the output, we will find the labels section of the particular pod.
Now we will create a new Fargate profile that includes the label `k8s-app=kube-dns`, so that the profile can match the coredns pods and schedule them.
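As a sketch, the profile can be created with the AWS CLI; the cluster name, role ARN, and subnet IDs below are placeholders you would substitute with your own values:

```shell
# Create a Fargate profile that selects kube-system pods labeled k8s-app=kube-dns.
# my-cluster, the role ARN, and the subnet IDs are placeholders.
aws eks create-fargate-profile \
  --cluster-name my-cluster \
  --fargate-profile-name coredns \
  --pod-execution-role-arn arn:aws:iam::111122223333:role/AmazonEKSFargatePodExecutionRole \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --selectors namespace=kube-system,labels={k8s-app=kube-dns}
```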
Now we will patch the coredns deployment to remove the `eks.amazonaws.com/compute-type` annotation, using the following command:

```shell
kubectl patch deployment coredns \
  -n kube-system \
  --type json \
  -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'
```
And we can see that our coredns pods have started and are running successfully.
We can also force a re-creation of the existing pods so that they are rescheduled onto Fargate.
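One common way to do this, assuming the coredns pods carry the standard `k8s-app=kube-dns` label, is to delete them so the deployment controller recreates them:

```shell
# Delete the current coredns pods; the deployment recreates them,
# and the new pods can now be matched by the Fargate profile.
kubectl delete pods -n kube-system -l k8s-app=kube-dns
```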
Our CoreDNS is now healthy and Active, whereas it was in a degraded state before. You are now ready to rock with k8s on AWS. Best of luck.

Navigate to:
EKS > Clusters > your_cluster > Add-on: coredns > health-issue
If your health issue is `InsufficientNumberOfReplicas`, try changing the instance type. For example, if you had selected `t3.medium` in the node group, try `t2.medium` or any other bigger instance type.

Then verify whether the coredns pods are running:

```shell
kubectl get pods -n kube-system | grep coredns
```

Doing this solved the problem for me.
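You can also check what health issue the add-on itself reports; a sketch using the AWS CLI, where `my-cluster` is a placeholder for your cluster name:

```shell
# Show the health issues the coredns add-on currently reports
# (e.g. InsufficientNumberOfReplicas). my-cluster is a placeholder.
aws eks describe-addon --cluster-name my-cluster --addon-name coredns \
  --query 'addon.health.issues'
```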
