
In my pods I cannot reach external hosts. In my case this would be https://login.microsoftonline.com.

I've been following the debugging DNS problems section at https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/, but my lack of Kubernetes knowledge makes it hard to apply the instructions given there.

Doing a local lookup works fine:

microk8s kubectl exec -i -t dnsutils -- nslookup kubernetes.default
Server:         10.152.183.10
Address:        10.152.183.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.152.183.1

However, trying to reach any external domain fails:

microk8s kubectl exec -i -t dnsutils -- nslookup stackoverflow.com
Server:         10.152.183.10
Address:        10.152.183.10#53

** server can't find stackoverflow.com.internal-domain.com: SERVFAIL

command terminated with exit code 1

The known issues section has the following paragraph:

Some Linux distributions (e.g. Ubuntu) use a local DNS resolver by default (systemd-resolved). Systemd-resolved moves and replaces /etc/resolv.conf with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet's --resolv-conf flag to point to the correct resolv.conf (With systemd-resolved, this is /run/systemd/resolve/resolv.conf). kubeadm automatically detects systemd-resolved, and adjusts the kubelet flags accordingly.

Given that the microk8s instance is running on Ubuntu, this might be worth investigating, but I have no idea where or how to apply that --resolv-conf flag.
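For the record, microk8s does not use kubeadm, so that automatic detection does not apply; the flag has to go into the kubelet args file that the snap reads. A hedged sketch of what that would look like (the path is the usual one for the microk8s snap, but verify it on your system):

```shell
# Append the flag to the kubelet arguments used by the microk8s snap,
# pointing kubelet at the real resolv.conf maintained by systemd-resolved
# instead of the 127.0.0.53 stub:
echo "--resolv-conf=/run/systemd/resolve/resolv.conf" | \
  sudo tee -a /var/snap/microk8s/current/args/kubelet

# Restart the kubelet daemon so the new flag takes effect:
sudo service snap.microk8s.daemon-kubelet restart
```

Note that kubelet writes a pod's /etc/resolv.conf when the pod is created, so existing pods (including dnsutils) would need to be recreated before the change is visible inside them.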

I am grateful for any hints on how I can track down this issue, since DNS including nslookup, traceroute et al is working flawlessly on the host system.


Update: this is the /etc/resolv.conf on the host:

nameserver 127.0.0.53
options edns0 trust-ad
search internal-domain.com

And that is the /etc/resolv.conf from within the dnsutils pod:

search default.svc.cluster.local svc.cluster.local cluster.local internal-domain.com
nameserver 10.152.183.10
options ndots:5
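That pod resolv.conf also explains the odd name in the failing lookup above: with `ndots:5`, a name like stackoverflow.com has fewer than five dots, so the resolver tries it with each search suffix appended before trying it as-is, which is why the error mentions stackoverflow.com.internal-domain.com. A minimal illustrative sketch of that expansion logic (not the actual resolver code):

```python
def candidate_names(name, search, ndots=5):
    """Illustrative glibc-style search-list expansion (not the real resolver).

    Names with fewer than `ndots` dots get each search suffix appended
    before the name is tried as-is; a trailing dot disables expansion.
    """
    if name.endswith("."):                      # absolute name: no expansion
        return [name]
    if name.count(".") >= ndots:                # enough dots: try as-is first
        return [name + "."] + [f"{name}.{s}." for s in search]
    return [f"{name}.{s}." for s in search] + [name + "."]

search_list = ["default.svc.cluster.local", "svc.cluster.local",
               "cluster.local", "internal-domain.com"]

for candidate in candidate_names("stackoverflow.com", search_list):
    print(candidate)
# Among the names tried is stackoverflow.com.internal-domain.com.,
# matching the SERVFAIL message above.
```

Appending a trailing dot (`nslookup stackoverflow.com.`) bypasses the search list entirely, which is a quick way to tell a search-path artifact from a genuinely broken upstream.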

And this is the CoreDNS ConfigMap:

Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        log . {
          class error
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 8.8.8.8 8.8.4.4
        cache 30
        loop
        reload
        loadbalance
    }
Marco
  • Just to be sure, do you have your [dns addon](https://microk8s.io/docs/addon-dns) enabled? For testing purposes, can you try the command `sudo iptables -P FORWARD ACCEPT` and check if it's working afterwards? You mentioned that you checked dns troubleshooting. Have you checked the coredns logs? Can you paste them? – acid_fuji Jan 05 '21 at 09:14
  • already done that and CoreDNS is enabled – Marco Jan 05 '21 at 09:14
  • What about the rest? Does iptables change anything? Can you also run `microk8s inspect` and paste the output? Have you performed any changes recently? Lastly, can you try to restart microk8s (microk8s stop, then microk8s start)? – acid_fuji Jan 05 '21 at 09:47
  • Yes, I followed the instructions in the documentation. It did not result in any measurable difference – Marco Jan 05 '21 at 09:50
  • Notice how the error message says it can't resolve `stackoverflow.com.internal-domain.com`. Try with `stackoverflow.com.` with a dot at the end instead. But really, we have no idea what is at 127.0.0.53 or how, if at all, it is able to resolve external domains. – tripleee Jan 05 '21 at 10:19
  • that should be systemd-resolved, Ubuntu's DNS service – Marco Jan 05 '21 at 10:20
  • Marco, does your command `kubectl exec -ti dnsutils -- cat /etc/resolv.conf` show exactly the same config as the node's? – acid_fuji Jan 05 '21 at 10:30
  • Can you try to add the required flag to kubelet: `echo "--resolv-conf=/run/systemd/resolve/resolv.conf" >> /var/snap/microk8s/current/args/kubelet` and then `sudo service snap.microk8s.daemon-kubelet restart` – acid_fuji Jan 05 '21 at 10:33
  • @thomas no, they differ. – Marco Jan 05 '21 at 10:37
  • Can you try to append the kubelet config like I mentioned above, restart your pod, and then try again? Also please have a look at the coredns configmap and update the results in the question (`microk8s kubectl describe configmap -n kube-system coredns`). – acid_fuji Jan 05 '21 at 10:42
  • Already ahead of you :) Unfortunately no changes. Internal DNS ok, external DNS, not so much. – Marco Jan 05 '21 at 10:44
  • Any luck with that configmap? Where are running this VM? Cloud? – acid_fuji Jan 05 '21 at 10:53
  • No luck with anything I've tried or you've suggested. It's a physical box in a data center. – Marco Jan 05 '21 at 11:04
  • Please read my comments once again. I've asked you to describe the coredns configmap and update the results in the question. It's very hard to debug something without full context. I'm trying to find where the problem lies. I just deployed microk8s and it works fine for me on Ubuntu. – acid_fuji Jan 05 '21 at 11:15

2 Answers


In the end I could not figure out what the reason for this behaviour was, so I did a full reset of the node:

microk8s reset
sudo snap remove microk8s
sudo snap install microk8s --classic --channel=1.19

Followed by the remaining instructions to configure secrets et al.

Marco

Change `forward . 8.8.8.8 8.8.4.4` to `forward . /etc/resolv.conf` in the CoreDNS Corefile.
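A sketch of what that looks like in practice, assuming the stock microk8s CoreDNS deployment: edit the ConfigMap shown in the question and restart CoreDNS so the change takes effect immediately. With `forward . /etc/resolv.conf`, CoreDNS forwards to whatever upstream servers the node's resolv.conf lists, which only works if that file does not point back at the 127.0.0.53 systemd-resolved stub (hence the `--resolv-conf` kubelet flag discussed in the question).

```shell
# Open the CoreDNS ConfigMap and replace the forward line:
#     forward . 8.8.8.8 8.8.4.4   ->   forward . /etc/resolv.conf
microk8s kubectl -n kube-system edit configmap coredns

# Restart CoreDNS so the new Corefile is picked up right away:
microk8s kubectl -n kube-system rollout restart deployment coredns
```

The Corefile shown above already contains the `reload` plugin, so CoreDNS would eventually pick the change up on its own; the explicit restart just avoids the wait.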

Sekru