I'm running Kubernetes 1.11 and trying to configure the cluster to check a local name server first. I read the instructions on the Kubernetes site for customizing CoreDNS and used the Dashboard to edit the system ConfigMap for CoreDNS. The resulting Corefile value is:

.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream 192.168.1.3 209.18.47.61
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
    reload
}

You can see the local address listed as the first upstream name server. My problem is that this doesn't seem to have had any effect. I have a container running with ping and nslookup, and neither will resolve names from the local name server.

I've worked around the problem for the moment by specifying the name server configuration in a few pod specifications that need it, but I don't like the workaround.
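For reference, the per-pod workaround looks roughly like this: `dnsPolicy: "None"` plus an explicit `dnsConfig` makes the pod use the given name servers instead of cluster DNS. The pod name and image below are placeholders; the nameserver address is the local one from the Corefile above.

```shell
# Sketch of the per-pod DNS workaround. Pod name and image are
# placeholders; 192.168.1.3 is the local name server from the question.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-workaround-example
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 192.168.1.3        # local name server
    searches:
      - cluster.local
  containers:
    - name: shell
      image: busybox:1.28
      command: ["sleep", "3600"]
EOF
```

The downside, as noted, is that every pod needing the local name server has to carry this stanza.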

How do I force CoreDNS to update based on the changed ConfigMap? I can see that it is a Deployment in the kube-system namespace, but I haven't found any docs on how to get it to reload or otherwise respond to a changed configuration.

E. Wittle

3 Answers

One way to apply ConfigMap changes is to restart the CoreDNS deployment:

kubectl rollout restart -n kube-system deployment/coredns

Viliam Pucik
  • This worked for me. Editing the `Deployment/coredns` object did not work, and killing the individual pods didn't work either (surprising) on Amazon EKS. – Trevor Sullivan Mar 09 '22 at 02:08
  • It worked for me; all DNS resolution was blocked and returned timeouts. – Ainokila Jun 13 '23 at 15:20
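After the restart, it is worth confirming that the pods came back and that resolution actually goes through the new upstream. The test hostname below is a placeholder for a name only your local name server knows about.

```shell
# Wait for the restarted CoreDNS pods to become ready
kubectl rollout status -n kube-system deployment/coredns

# Run a throwaway pod and try resolving a name served by the local
# name server (replace myhost.example.internal with a real one)
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
  -- nslookup myhost.example.internal
```

Note that `kubectl rollout restart` was added in kubectl 1.15; on older clients, deleting the pods (as in the answer below) has the same effect.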

You can edit the ConfigMap from the command line:

kubectl edit cm coredns -n kube-system

Save and exit; CoreDNS should pick up the change.

If it does not reload, delete the CoreDNS pods, as Emruz Hossain advised:

kubectl get pods -n kube-system -o name | grep coredns | xargs kubectl delete -n kube-system

Crou
  • I have the same issue, on EKS. Deleting the pods didn't help, not even terminating all nodes with all pods. I also tried to apply the kube-dns ConfigMap with upstreamNameservers instead, as suggested here: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/ None of these helped. – Johnathan Kanarek Jun 02 '19 at 20:34
  • As of `coredns:v1.8.4-eksbuild.1`, I did `kubectl edit cm coredns -n kube-system` and had to wait 60 seconds (even though it says `cache 30`) before the changes took effect. There was no need to reload or delete the pods. – Titi Wangsa bin Damhore Sep 28 '21 at 06:36
CoreDNS will reload itself within 30 to 45 seconds, because you have the `reload` plugin enabled in the ConfigMap: https://coredns.io/plugins/reload/

If you want the change applied immediately after editing the ConfigMap, you can either delete the CoreDNS pods or do a rolling restart of the deployment.
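Both options as concrete commands; the deployment name and label are the kube-system defaults, so adjust them if your distribution names things differently.

```shell
# Option 1: delete the CoreDNS pods; the Deployment recreates them
kubectl delete pods -n kube-system -l k8s-app=kube-dns

# Option 2: rolling restart of the deployment (requires kubectl 1.15+)
kubectl rollout restart -n kube-system deployment/coredns
```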