
I've built a cluster with 3 worker nodes and an admin node. The worker nodes have kube-dns and Calico deployed and configured. Each machine has its own external IP and an associated DNS name. I successfully ran the nginx-ingress-controller, and its default 404 endpoint is accessible from the outside.

Now, the problem is that for some reason pods on the workers are not allowed to establish outbound connections. When I shell exec into a pod, I cannot curl or ping anything, even though the network seems to be configured correctly inside the pod. I tried to examine the Calico configuration, but it's quite messy and I don't know how it could be wrong. Are there any default Calico/Kubernetes settings that forbid outgoing connections from the nodes? Or has somebody faced a similar issue?
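To illustrate, the failing check looks roughly like this (the pod name is just a placeholder):

    # shell into a running pod
    kubectl exec -it <my-app-pod> -- sh

    # inside the pod, neither of these gets through:
    ping -c 3 8.8.8.8
    curl -v https://www.google.com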

I'll provide log outputs on demand, as I'm unsure what information would be most useful in examining this issue.

  • Apologies if you've already said, but have you got Ingress resources pointing to Services for your pods? You mention the nginx ingress controller but not Ingress resources or Services as far as I can see. – Ryan Dawson Aug 09 '18 at 20:28
  • Yes, the services are established and up, but I cannot access them anyway, as I need outbound connections from the pod enabled for my applications to start – Michał Bień Aug 10 '18 at 06:43
  • but I'm sure this is a connection problem, as I have now also helm-deployed cert-manager and it cannot access the ACME API for validation, failing with the message "host is unreachable" – Michał Bień Aug 10 '18 at 06:44
  • Could you provide the configurations for your Deployments and Services? – Artem Golenyaev Aug 10 '18 at 09:34

1 Answer


Thanks for the comments. After many hours of investigation, I finally found that the problem was a misconfigured kube-dns. When you deploy kube-dns, it automatically imports the nameserver list from your machine's /etc/resolv.conf. That works great, unless you are on Ubuntu with the systemd-resolved stub DNS server installed (and it is installed by default). It acts as a proxy DNS server listening on 127.0.0.53, which is unreachable from inside pods. That's why DNS nameservers were unreachable even after kube-dns was installed and active.
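To illustrate, on an affected Ubuntu node the two files look roughly like this (the upstream address below is just an example):

    # what kube-dns imports: the local systemd-resolved stub, unreachable from pods
    $ cat /etc/resolv.conf
    nameserver 127.0.0.53

    # the real upstream nameserver(s) that systemd-resolved forwards to
    $ cat /run/systemd/resolve/resolv.conf
    nameserver 192.168.1.1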

The workaround I used for this problem is as follows:

  1. Check which nameserver your machine actually uses - for me it was listed in /run/systemd/resolve/resolv.conf

  2. Create a new ConfigMap to replace kube-dns's default one, and fill it as follows:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      labels:
        addonmanager.kubernetes.io/mode: EnsureExists
      name: kube-dns
      namespace: kube-system
    data:
      upstreamNameservers: |
        ["Your nameserver address"]
    
  3. Redeploy kube-dns. Your DNS should work correctly now; a sketch of the commands is below.
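A minimal sketch of steps 2-3 with kubectl, assuming the ConfigMap above is saved as kube-dns-upstream.yaml and that the kube-dns pods carry the usual k8s-app=kube-dns label:

    # apply the ConfigMap with the upstream nameservers
    kubectl apply -f kube-dns-upstream.yaml

    # delete the kube-dns pods so the Deployment recreates them with the new config
    kubectl -n kube-system delete pod -l k8s-app=kube-dns

    # afterwards, verify DNS from inside an application pod
    kubectl exec -it <my-app-pod> -- nslookup kubernetes.default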

  • Hi... How did you redeploy kube-dns? Did you first deploy the ConfigMap and then stop the coredns pods? – user3583252 Aug 02 '19 at 07:59
  • Hi, in my case I was still using the old kube-dns, not CoreDNS, which is part of default Kubernetes setups starting with Kubernetes 1.11. Most of the kube-dns problems were solved with CoreDNS AFAIK. If you want to redeploy your CoreDNS installation, I suggest following this guide: https://github.com/coredns/deployment/tree/master/kubernetes – Michał Bień Aug 08 '19 at 20:05