
We have deployed a Kubernetes cluster in the Azure public cloud using acs-engine. We are able to create deployments and services, but when we try to enter a pod with "kubectl exec -ti (pod name) (command)" we receive the error below:

Error from server: error dialing backend: dial tcp: lookup (node hostname) on 168.63.129.16:53: no such host

I looked all over the internet and tried everything I could to fix this issue, but no luck so far. The OS is Ubuntu, and 168.63.129.16 is a public IP from Azure used for DNS (see the link below):

https://blogs.msdn.microsoft.com/mast/2015/05/18/what-is-the-ip-address-168-63-129-16/

I've already added host entries to /etc/hosts and entries to resolv.conf on the master/node servers, and nslookup resolves the hostname. I've also tried adding the --resolv-conf flag to kubelet, but it still fails. I'm hoping someone from this community can help us fix this issue.
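
For reference, the checks I ran look roughly like this (the node hostname is a placeholder, and --resolv-conf is pointed at wherever your resolver file actually lives):

# Confirm the node hostname resolves against the Azure DNS IP
nslookup (node hostname) 168.63.129.16

# The entries added by hand on the master/node servers
cat /etc/hosts
cat /etc/resolv.conf

# kubelet can also be pointed at an explicit resolver file
kubelet --resolv-conf=/etc/resolv.conf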

Leo Lazarus
  • As far as I know, if you assign a limited ResourceQuota to a namespace, you can only create so many resources there, for example deployments, pods, services, PVCs, etc., and those resources consume the quota. If you then try to consume more quota with another API call or request, in your case "kubectl exec -ti (pod name) (command)", it will be rejected by the admission controller. I would suggest you resolve the resource quota limitations. You can also dig further into the admission controller; perhaps there is a flag to ignore it (a quota check is sketched after these comments). – Suresh Vishnoi Nov 16 '17 at 20:12
  • Can you run any other `kubectl` commands? – Rico Nov 16 '17 at 21:06
  • @Rico Yeah, we are able to run other kubectl commands and they work fine. – Leo Lazarus Nov 17 '17 at 10:38
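
For completeness, the quota theory from the first comment can be checked with kubectl (the namespace name is a placeholder):

# List quotas across all namespaces
kubectl get resourcequota --all-namespaces

# Show limits and current usage in the pod's namespace
kubectl describe resourcequota -n (namespace)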

2 Answers


Verify that the node on which your pod is running can be resolved and reached from inside the API server container. If you added entries to /etc/resolv.conf on the master node, verify they are visible in the API server container; if they are not, restarting the API server pod might help.
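
A minimal way to check that on the master, assuming the API server runs as a Docker-managed static pod and its image ships basic shell utilities (the container ID is a placeholder):

# Find the API server container on the master
docker ps | grep kube-apiserver

# Check that the hosts/resolver entries are visible inside it
docker exec (container id) cat /etc/hosts
docker exec (container id) cat /etc/resolv.conf

# If they are missing, kill the container; kubelet recreates the static pod
docker kill (container id)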


The problem was in the VirtualBox layer:

sudo ifconfig vboxnet0 up

The solution is taken from here: https://github.com/kubernetes/minikube/issues/1224#issuecomment-316411907
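
To confirm the interface actually came up afterwards (vboxnet0 is the usual name for the first host-only adapter; yours may differ):

# List VirtualBox host-only interfaces and their status
VBoxManage list hostonlyifs

# Verify the interface is up on the host
ifconfig vboxnet0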

itsnikolay