
I am running a local Kubernetes cluster using the ./hack/local-up-cluster.sh script. When my firewall is off, all the containers in the kube-dns pod are running:

```
# cluster/kubectl.sh get pods --all-namespaces
NAMESPACE     NAME                       READY     STATUS    RESTARTS   AGE
kube-system   kube-dns-73328275-87g4d   3/3       Running   0          45s
```

But when the firewall is on, only 2 of the 3 containers are running:

```
# cluster/kubectl.sh get pods --all-namespaces
NAMESPACE     NAME                       READY     STATUS    RESTARTS   AGE
kube-system   kube-dns-806549836-49v7d   2/3       Running   0          45s
```

After investigating in detail, it turns out the pod is failing because the dnsmasq container is not running:

```
7m      7m      1   kubelet, 127.0.0.1  spec.containers{dnsmasq}    Normal      Killing         Killing container with id docker://41ef024a0610463e04607665276bb64e07f589e79924e3521708ca73de33142c:pod "kube-dns-806549836-49v7d_kube-system(d5729c5c-24da-11e7-b166-52540083b23a)" container "dnsmasq" is unhealthy, it will be killed and re-created.
```

Can you help me figure out how to run the dnsmasq container with the firewall on, and what exactly I would need to change? TIA.
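For reference, the "unhealthy" event above comes from the kubelet's liveness probe, which reaches dnsmasq over the pod network; if host firewall rules drop that traffic, the probe fails and the container is killed. A minimal sketch of how this is commonly addressed, assuming the host runs firewalld and the pods sit on the docker0 bridge (the interface name and the port numbers are assumptions based on the default kube-dns manifest, not something stated in the question):

```shell
# Assumption: firewalld on the host, pods on the docker0 bridge.
# Option 1: treat the container bridge as trusted so all pod traffic passes:
sudo firewall-cmd --permanent --zone=trusted --change-interface=docker0
sudo firewall-cmd --reload

# Option 2 (narrower): open only the kube-dns ports --
# 53/tcp+udp for dnsmasq itself, 10054/tcp for its health check:
sudo firewall-cmd --permanent --add-port=53/tcp --add-port=53/udp
sudo firewall-cmd --permanent --add-port=10054/tcp
sudo firewall-cmd --reload
```

After reloading, deleting the kube-dns pod lets the Deployment recreate it and the probes run again against the new rules.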

Update: it turns out my kube-dns service has no endpoints. Any idea why that is?
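On the endpoints question: a Service only lists a pod as an endpoint once all of the pod's readiness checks pass, so a 2/3 pod stays out of the list. A couple of diagnostic commands to confirm that (the pod name is taken from the output above; replace it with the one in your cluster):

```shell
# The ENDPOINTS column stays empty until the kube-dns pod is Ready:
cluster/kubectl.sh get endpoints kube-dns --namespace=kube-system

# Show the probe failures recorded as events on the pod:
cluster/kubectl.sh describe pod kube-dns-806549836-49v7d --namespace=kube-system
```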

Pensu
  • can you provide logs of the failing container ? `$ kubectl logs kube-dns-v20-p299t dnsmasq --namespace kube-system` (**NOTE**: in the above command replace the pod name with the pod name in your cluster) – surajd Apr 26 '17 at 07:08

1 Answer


You can flush the iptables rules (`iptables -F`) before starting your cluster; that can solve your problem.
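A sketch of what the answer suggests, plus a narrower alternative that is my assumption rather than part of the answer (note that `iptables -F` removes all rules, effectively disabling the host firewall until they are restored):

```shell
# The answer's approach: flush every iptables rule before starting the
# cluster, so nothing blocks the kubelet's health-check traffic:
sudo iptables -F

# A less drastic alternative (assumption, not from the answer): accept
# traffic on the container bridge only, leaving the other rules intact:
sudo iptables -I INPUT -i docker0 -j ACCEPT
sudo iptables -I FORWARD -i docker0 -o docker0 -j ACCEPT
```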

Suraj Narwade