
I'm using Kubernetes v1.12, and my system is Ubuntu 16.04.

I used the following commands to create the DNS resources:

wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed

wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh
bash deploy.sh -i 10.32.0.10 -r "10.32.0.0/24" -s -t coredns.yaml.sed | kubectl apply -f -
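
The same pipeline can also be split so the rendered manifest can be inspected before applying it (coredns.yaml here is just a scratch file name):

bash deploy.sh -i 10.32.0.10 -r "10.32.0.0/24" -s -t coredns.yaml.sed > coredns.yaml
kubectl apply -f coredns.yaml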

After creating the CoreDNS resources, I checked their status.

  1. Check the CoreDNS service
root@master:~# kubectl get svc -n kube-system
NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
calico-typha   ClusterIP   10.32.0.10   <none>        5473/TCP   13h
  2. Check the CoreDNS pod endpoints
root@master:~# kubectl get ep -n kube-system
NAME                      ENDPOINTS   AGE
calico-typha              <none>      13h
kube-controller-manager   <none>      18d
kube-scheduler            <none>      18d
  3. My DNS config:
root@master:~# cat /etc/resolv.conf
nameserver 183.60.83.19
nameserver 183.60.82.98
  4. Check the CoreDNS pod logs
root@master:~# kubectl get po -n kube-system | grep coredns-7bbd44c489-5thlj
coredns-7bbd44c489-5thlj   1/1     Running   0          13h
root@master:~#
root@master:~# kubectl logs -n kube-system pod/coredns-7bbd44c489-5thlj
.:53
2019-03-16T01:37:14.661Z [INFO] CoreDNS-1.2.6
2019-03-16T01:37:14.661Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
 [INFO] plugin/reload: Running configuration MD5 = 2e2180a5eeb3ebf92a5100ab081a6381
 [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:45913->183.60.83.19:53: i/o timeout
 [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:42500->183.60.82.98:53: i/o timeout
 [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:48341->183.60.82.98:53: i/o timeout
 [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:33007->183.60.83.19:53: i/o timeout
 [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:52968->183.60.82.98:53: i/o timeout
 [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:48992->183.60.82.98:53: i/o timeout
 [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:35016->183.60.83.19:53: i/o timeout
 [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:58058->183.60.82.98:53: i/o timeout
 [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:51709->183.60.83.19:53: i/o timeout
 [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:53889->183.60.82.98:53: i/o timeout
root@master:~#

I found that the CoreDNS pod IP cannot connect to the node's upstream DNS server addresses.
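
A quick way to reproduce this from inside the pod network (a sketch; the pod name and busybox image are arbitrary choices):

kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup kubernetes.io 183.60.83.19

If the upstream resolver is unreachable from pods, this query times out just like the HINFO lookups in the log above.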

yang yang

3 Answers


You should check your Calico firewall (network) policies to see whether they block internet access from pods. Another thing to check is which mode Calico is using: IP-in-IP (ipip), and whether NAT outgoing is enabled.
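
For example, the pool settings can be inspected with something like this (a sketch; it assumes calicoctl is installed and configured for your datastore):

calicoctl get ippool -o yaml

Look at the ipipMode and natOutgoing fields in the output.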

nct

You are missing the kube-dns service.

The -i flag of deploy.sh sets the IP for the kube-dns service, and in your example 10.32.0.10 is already assigned to calico-typha, so you need to choose a different IP. Moreover, it has to fall inside the valid service range; kubectl will complain if it does not.

You can always check it by running kubectl cluster-info dump | grep service-cluster-ip-range.
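
For example, rerunning the deployment with a free address could look like this (a sketch; 10.32.0.53 is just an illustrative unused IP inside the 10.32.0.0/24 service range):

bash deploy.sh -i 10.32.0.53 -r "10.32.0.0/24" -s -t coredns.yaml.sed | kubectl apply -f -
kubectl get svc,ep -n kube-system | grep dns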

MWZ

When I first saw these CoreDNS issues, I assumed it was a CoreDNS/DNS/resolv.conf problem. I only found a solution once I noticed that none of my pods seemed to have internet access and started suspecting that more than kube-proxy was involved.

I turned to iptables to see if anything was blocking access, and looked for the rules applied for 10.96.0.10. I didn't find any rules in my iptables (nft), but did find some in my iptables-legacy (Debian 10). I blamed Calico and rebuilt my Kubernetes cluster from scratch, deleting everything and restarting with:

# Reset kubeadm state and remove leftover config and data
kubeadm reset -f
rm -rf /etc/cni /etc/kubernetes /var/lib/dockershim /var/lib/etcd /var/lib/kubelet /var/run/kubernetes ~/.kube/*

# Flush and delete all chains in the nft-backed tables
iptables -F && iptables -X
iptables -t raw -F && iptables -t raw -X
iptables -t mangle -F && iptables -t mangle -X
iptables -t nat -F && iptables -t nat -X

# Do the same for the legacy tables
iptables-legacy -F && iptables-legacy -X
iptables-legacy -t raw -F && iptables-legacy -t raw -X
iptables-legacy -t mangle -F && iptables-legacy -t mangle -X
iptables-legacy -t nat -F && iptables-legacy -t nat -X

systemctl restart docker


I then started my cluster via sudo kubeadm init --config CLUSTER.yaml --upload-certs.

I checked iptables again to confirm that nothing had landed in iptables-legacy (my default was iptables-nft).

I pulled the Calico manifest locally and added the following to the calico-node container's env section:

            - name: FELIX_IPTABLESBACKEND
              value: "NFT"
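
If you would rather not edit the manifest by hand, the same variable can also be set on the daemonset once it exists (a sketch of the equivalent command):

kubectl -n kube-system set env daemonset/calico-node FELIX_IPTABLESBACKEND=NFT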

Also, if you set a different pod subnet in your CLUSTER.yaml, update CALICO_IPV4POOL_CIDR appropriately in your Calico file.

Once you get kubectl working (by copying the proper kubeconfig), apply the updated file:

kubectl apply -f calico.yaml

Double-check iptables again. You should then be able to join your control-plane and worker nodes via the commands that the original kubeadm init printed.
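
As a final sanity check (a sketch; the pod name and image are arbitrary), confirm that the legacy tables stayed empty and that cluster DNS resolves from inside a pod:

iptables-legacy-save | grep -c '^-'
kubectl run -it --rm dnscheck --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default

The first command counts legacy rules (expect 0); the second should return the kubernetes service's cluster IP.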

Dharman