
Sometimes `kubectl get pod some-pod-1234abc` returns an error like: `Error from server (NotFound): pods "ip-192-168-55-196.us-east-1.compute.internal" not found`. This is surprising because the error references what looks like a node name, not the pod name. It happens very rarely, and seems to occur (I have yet to verify this with certainty) only for pods that were recently deleted.

What conditions could cause this to happen? This is a Kubernetes 1.20 cluster on AWS EKS, using Spot instances. I am not concerned with the pods being deleted; I am trying to understand why kubectl puts the node name in the message instead of the pod name.

ebr
  • You can try dialing up the verbosity of kubectl to see what, exactly, it is requesting. Usually Node names come up when interacting with Pod resources for the logs, since kubectl needs to contact the kubelet on port 10250 to obtain them, but I don't recall ever having seen that behavior with just a `get pods` operation – mdaniel Dec 13 '21 at 17:05
  • Hello @ebr. Any updates? – Wytrzymały Wiktor Dec 22 '21 at 12:36
  • @WytrzymałyWiktor check Rajesh Dutta's answer below. It makes sense that the pod gets a DNS entry that *looks* like a node name, but in fact is simply a standard internal DNS name used by AWS networking – ebr Dec 26 '21 at 20:58
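The verbosity suggestion from the comments can be tried as follows (the pod name is the asker's placeholder; `-v=8` is a standard kubectl flag that prints the HTTP requests kubectl issues):

```shell
# Re-run the failing command with high verbosity to see the exact API call.
# -v=8 logs the request URL and the response body.
kubectl get pod some-pod-1234abc -v=8

# A plain GET for a namespaced pod should hit a URL of the form:
#   GET https://<apiserver>/api/v1/namespaces/<namespace>/pods/some-pod-1234abc
# If the logged URL or the response body instead contains the node-style name
# (ip-192-168-55-196.us-east-1.compute.internal), that shows where the
# substitution is happening.
```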

1 Answer


This is expected behavior. That is how a Pod is registered in DNS (as an A/AAAA record).

Syntax: `pod-ip-address.namespace.pod.cluster-domain.example`

In my understanding:

Pod

  • ip-address = `ip-192-168-55-196`
  • namespace = `us-east-1`
  • cluster = `compute.internal`

Check this link.
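To make the two naming schemes concrete, here is a small sketch contrasting the Kubernetes in-cluster Pod DNS record with the EC2 private DNS name that the error message actually shows (the helper functions and the `cluster.local` / `default` defaults are illustrative assumptions, not anything from the cluster in question):

```python
def pod_dns_name(pod_ip: str, namespace: str, cluster_domain: str = "cluster.local") -> str:
    """Kubernetes in-cluster DNS record for a Pod: dots in the IP become dashes,
    followed by <namespace>.pod.<cluster-domain>."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"


def ec2_private_dns_name(ip: str, region: str = "us-east-1") -> str:
    """EC2 private DNS name assigned by AWS VPC networking -- the form that
    appears in the error message and that resembles a node name."""
    return f"ip-{ip.replace('.', '-')}.{region}.compute.internal"


print(pod_dns_name("192.168.55.196", "default"))
# 192-168-55-196.default.pod.cluster.local

print(ec2_private_dns_name("192.168.55.196"))
# ip-192-168-55-196.us-east-1.compute.internal
```

Note how the same IP yields two different names: the dashed-IP Kubernetes record, and the AWS `compute.internal` name that merely *looks* like a node name, as the comments on the question point out.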

– Rajesh Dutta