
I have a problem with my on-premise Kubernetes cluster created with Kubespray. I don't know why, but DNS resolution is not working on one of my three nodes.

I tried launching a debug container to run a simple nslookup inside it, and only from the problematic node do I get a timeout:

root@dns-test:/# nslookup kubernetes.default.svc.cluster.local 10.233.0.3
;; connection timed out; no servers could be reached

I am a bit lost. I checked /etc/resolv.conf and it is the same on all servers. I tried disabling the firewall on the node itself, but I get the same problem.
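For context, here are a few node-level checks that are commonly useful in this situation. This is a sketch, not a definitive diagnosis: 10.233.0.3 is the Kubespray default cluster DNS service IP, and `<node-name>` is a placeholder you must replace.

```shell
# Check that the CoreDNS / nodelocaldns pods are healthy, especially any
# instance running on the problematic node
kubectl get pods -n kube-system -o wide | grep -E 'coredns|nodelocaldns'

# From the problematic node itself, test UDP port 53 reachability to the
# cluster DNS service IP (10.233.0.3 is the Kubespray default; adjust if needed)
nc -u -z -w 2 10.233.0.3 53 && echo "reachable" || echo "timeout"

# List kube-system pods scheduled on that node; DNS traffic to the service IP
# depends on working kube-proxy (iptables/IPVS) rules and a healthy CNI pod
kubectl get pods -n kube-system -o wide --field-selector spec.nodeName=<node-name>
```

If the `nc` probe times out from the node while it succeeds from the others, the issue is likely in kube-proxy rules or the CNI overlay on that node rather than in CoreDNS itself.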

Does anyone have an idea about this? Thank you in advance!

1 Answer


The problem is not entirely clear yet; you need to determine whether it affects an individual pod or the entire cluster. Run a temporary pod and try to resolve DNS from it. This will help determine whether the problem is specific to that node or affects the whole cluster:

kubectl run dns-test --image=busybox:1.33 --restart=Never --rm -it -- nslookup kubernetes.default.svc.cluster.local
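To be sure the test actually runs on the suspect node, the same pod can be pinned to it with `--overrides` (a sketch; `<node-name>` is a placeholder for the problematic node's name):

```shell
# Schedule the debug pod directly on the suspect node by setting spec.nodeName
kubectl run dns-test --image=busybox:1.33 --restart=Never --rm -it \
  --overrides='{"spec":{"nodeName":"<node-name>"}}' \
  -- nslookup kubernetes.default.svc.cluster.local
```

If the lookup succeeds when the pod lands on a healthy node but times out when pinned to the suspect node, the fault is node-local (kube-proxy, CNI, or host firewall) rather than in CoreDNS.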