
I tried creating a node group inside the EKS cluster. After creating the node group, the core-dns add-on shows as degraded under Add-ons. I have tried every fix I could find on Google but am unable to resolve this. Can someone help with this?

  • Can you post the complete output of `kubectl get deploy -n kube-system coredns -o yaml` to your question? – gohm'c Feb 08 '22 at 01:15
  • You can try this out. Might be related to NAT [https://serverfault.com/questions/1077378/aws-eks-add-on-coredns-status-as-degraded-and-node-group-creation-failed-is-una?newreg=41a28d6601b04077bc716fa687a0fd7a](https://serverfault.com/questions/1077378/aws-eks-add-on-coredns-status-as-degraded-and-node-group-creation-failed-is-una?newreg=41a28d6601b04077bc716fa687a0fd7a) – Dame Lyngdoh Jul 28 '22 at 18:12

2 Answers


If you are using AWS EKS on Fargate, you have to add the label while creating the coredns Fargate profile; that way coredns will be able to find capacity for its deployment and start working. Here are the steps:

  1. List the pods in the kube-system namespace:

    kubectl get pods -n kube-system

We can see that the coredns pods are stuck in a Pending state. (screenshot: coredns pods stuck in Pending)

  2. Check why they are stuck in a Pending state (there are no nodes available on the AWS EKS cluster to schedule the coredns pods onto):

    kubectl describe pod [pod_name] -n kube-system

(screenshot: describe output showing the scheduling issue)

And if we scroll up a bit, we will find the Labels section of that pod.

(screenshot: the pod's labels)
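As a shortcut (not part of the original answer): the coredns pods carry the `k8s-app=kube-dns` label, so you can read the labels directly instead of scrolling through `describe` output:

```shell
# List the coredns pods by their well-known selector and print their
# labels in one step, rather than scanning kubectl describe output.
kubectl get pods -n kube-system -l k8s-app=kube-dns --show-labels
```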

  3. Now create a new Fargate profile for coredns that includes the label "k8s-app=kube-dns", so the profile can identify the coredns pods to deploy. (screenshot: adding the label to the coredns Fargate profile)

(screenshot: coredns Fargate profile created)
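The same profile can also be created from the AWS CLI instead of the console. A minimal sketch; the cluster name, role ARN, and subnet IDs below are placeholders you must replace with your own values:

```shell
# Create a Fargate profile whose selector matches the coredns pods
# (namespace kube-system plus the k8s-app=kube-dns label).
aws eks create-fargate-profile \
  --cluster-name my-cluster \
  --fargate-profile-name coredns \
  --pod-execution-role-arn arn:aws:iam::111122223333:role/AmazonEKSFargatePodExecutionRole \
  --selectors namespace=kube-system,labels={k8s-app=kube-dns} \
  --subnets subnet-aaaa1111 subnet-bbbb2222
```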

  4. Patch the coredns deployment to remove the eks.amazonaws.com/compute-type annotation, so the pods can be rescheduled onto Fargate:

    kubectl patch deployment coredns \
        -n kube-system \
        --type json \
        -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'

(screenshot: coredns patched successfully)

And we can see that our coredns pods have started and are running successfully. (screenshot: coredns pods Running)

  5. We can also re-create the existing pods using the following command:

    kubectl rollout restart -n kube-system deployment coredns

(screenshot: coredns restarted)

And our CoreDNS add-on, which was in a degraded state before, is now healthy and Active. (screenshot: coredns add-on Active)
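You can confirm the add-on state from the CLI as well; a small sketch, with "my-cluster" standing in for your cluster name:

```shell
# Query just the add-on status field; a healthy add-on reports ACTIVE
# rather than DEGRADED.
aws eks describe-addon \
  --cluster-name my-cluster \
  --addon-name coredns \
  --query 'addon.status'
```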

You are now ready to rock with k8s on AWS. Best of Luck.


Navigate to:

EKS > Clusters > your_cluster > Add-on: coredns > health-issue

If your health issue is InsufficientNumberOfReplicas, then try changing the instance type. For example, if you had selected t3.medium in the node group, try t2.medium or any other bigger instance type.
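One way to swap instance types is to create a replacement managed node group and delete the old one. A sketch using eksctl; the cluster and node group names and the instance type are placeholders:

```shell
# Bring up a replacement node group with a different instance type...
eksctl create nodegroup \
  --cluster my-cluster \
  --name ng-replacement \
  --node-type t3.large \
  --nodes 2

# ...then remove the old one once the new nodes are Ready.
eksctl delete nodegroup \
  --cluster my-cluster \
  --name ng-old
```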

Then verify with `kubectl get pods -n kube-system | grep coredns` whether they are running or not. Doing this solved the problem for me.
