
I am trying to get DNS pod name resolution working on my EKS Kubernetes cluster v1.10.3. My understanding is that creating a headless service will create the pod name records I need, but I'm finding this is not true. Am I missing something?

I'm also open to other ideas on how to get this working; I could not find an alternative solution.

Update:

I wasn't really clear enough. Essentially, what I need is for pod names to resolve like this: worker-767cd94c5c-c5bq7 -> 10.0.10.10, worker-98dcd94c5d-cabq6 -> 10.0.10.11, and so on.

I don't really need round-robin DNS; I just read somewhere that it could be a workaround. Thanks!

# my service
apiVersion: v1
kind: Service
metadata:
  ...
  name: worker
  namespace: airflow-dev
  resourceVersion: "374341"
  selfLink: /api/v1/namespaces/airflow-dev/services/worker
  uid: 814251ac-acbe-11e8-995f-024f412c6390
spec:
  clusterIP: None
  ports:
  - name: worker
    port: 8793
    protocol: TCP
    targetPort: 8793
  selector:
    app: airflow
    tier: worker
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}





# my pod
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: 2018-08-31T01:39:37Z
  generateName: worker-69887d5d59-
  labels:
    app: airflow
    pod-template-hash: "2544381815"
    tier: worker
  name: worker-69887d5d59-6b6fc
  namespace: airflow-dev
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: worker-69887d5d59
    uid: 16019507-ac6b-11e8-995f-024f412c6390
  resourceVersion: "372954"
  selfLink: /api/v1/namespaces/airflow-dev/pods/worker-69887d5d59-6b6fc
  uid: b8d82a6b-acbe-11e8-995f-024f412c6390
spec:
  containers:
  ...
  ...
    name: worker
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
      ...
      ...
  dnsPolicy: ClusterFirst
  nodeName: ip-10-0-1-226.us-west-2.compute.internal
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: airflow
  serviceAccountName: airflow
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
    ...
    ...
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-08-31T01:39:37Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-08-31T01:39:40Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-08-31T01:39:37Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  ...
  ...
    lastState: {}
    name: worker
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-08-31T01:39:39Z
  hostIP: 10.0.1.226
  phase: Running
  podIP: 10.0.1.234
  qosClass: BestEffort
  startTime: 2018-08-31T01:39:37Z





# querying the service dns record works!
airflow@worker-69887d5d59-6b6fc:~$ nslookup worker.airflow-dev.svc.cluster.local
Server:   172.20.0.10
Address:  172.20.0.10#53

Name: worker.airflow-dev.svc.cluster.local
Address: 10.0.1.234





# querying the pod name does not work :(
airflow@worker-69887d5d59-6b6fc:~$ nslookup worker-69887d5d59-6b6fc.airflow-dev.svc.cluster.local
Server:   172.20.0.10
Address:  172.20.0.10#53

** server can't find worker-69887d5d59-6b6fc.airflow-dev.svc.cluster.local: NXDOMAIN

airflow@worker-69887d5d59-6b6fc:~$ nslookup worker-69887d5d59-6b6fc.airflow-dev.pod.cluster.local
Server:   172.20.0.10
Address:  172.20.0.10#53

*** Can't find worker-69887d5d59-6b6fc.airflow-dev.pod.cluster.local: No answer
sebastian
  • Is there a specific reason why service DNS would not work for you? – yosefrow Aug 31 '18 at 03:22
  • @yosefrow Apologies, I wasn't really clear enough. Essentially what I need is to resolved as such: worker-767cd94c5c-c5bq7 -> 10.0.10.10 worker-98dcd94c5d-cabq6 -> 10.0.10.11 and so on.... I don't really need a round robin DNS just read somewhere that this could be a work around. Thanks! – sebastian Aug 31 '18 at 15:24
  • Although it is true that services support round robin, is there a reason why you must refer to the pod by the exact pod name and not by a generic service name which is mapped to a specific pod via label selectors on a 1 to 1 basis? – yosefrow Sep 01 '18 at 23:05
  • @sebastian were you able to get this to work? i.e. dns resolution of pod name? There are some apache projects that try to resolve pod names and run into this same problem. – victtim Jan 29 '19 at 11:38
  • @sebastian Hi, did my answer help? I noticed you did not accept an answer yet. Is there some way I can improve my answer to fit your specific scenario? – yosefrow Jun 02 '20 at 09:44

1 Answer


Internally, I suggest using the service DNS records, which you already confirmed work, to reach the pods. This does not require a headless Service; service DNS works for regular ClusterIP Services too.

The kube-dns automatic records work in the following way:

pod -> service in the same namespace: curl http://servicename

pod -> service in a different namespace: curl http://servicename.namespace

Read more about service discovery here: https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables

You can read more about DNS records for services here https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services
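To illustrate that service DNS does not need a headless Service, here is a minimal sketch of the same Service with the `clusterIP: None` line dropped (it reuses the port and selector from your manifest; nothing else is assumed). The Service then gets a virtual cluster IP, and worker.airflow-dev.svc.cluster.local resolves to that IP, with kube-proxy spreading connections across the matching pods:

```yaml
# Sketch only: a regular (non-headless) Service for the same workers.
# Without "clusterIP: None", DNS returns the Service's virtual IP
# rather than the individual pod IPs.
apiVersion: v1
kind: Service
metadata:
  name: worker
  namespace: airflow-dev
spec:
  ports:
  - name: worker
    port: 8793
    protocol: TCP
    targetPort: 8793
  selector:
    app: airflow
    tier: worker
```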

If you need custom name resolution externally I recommend using nginx-ingress:

https://github.com/helm/charts/tree/master/stable/nginx-ingress https://github.com/kubernetes/ingress-nginx

EDIT: Added details about pod DNS records


A Records and hostname based on Pod's hostname and subdomain fields

Currently, when a pod is created, its hostname is the Pod's metadata.name value.

With v1.2, users can specify a Pod annotation, pod.beta.kubernetes.io/hostname, to specify what the Pod's hostname should be. The Pod annotation, if specified, takes precedence over the Pod's name as the hostname of the pod. For example, given a Pod with the annotation pod.beta.kubernetes.io/hostname: my-pod-name, the Pod will have its hostname set to "my-pod-name".

With v1.3, the PodSpec has a hostname field, which can be used to specify the Pod's hostname. This field value takes precedence over the pod.beta.kubernetes.io/hostname annotation value.

v1.2 introduces a beta feature where the user can specify a Pod annotation, pod.beta.kubernetes.io/subdomain, to specify the Pod's subdomain. The final domain will be "<hostname>.<subdomain>.<namespace>.svc.<cluster-domain>". For example, a Pod with the hostname annotation set to "foo", and the subdomain annotation set to "bar", in namespace "my-namespace", will have the FQDN "foo.bar.my-namespace.svc.cluster.local".

With v1.3, the PodSpec has a subdomain field, which can be used to specify the Pod's subdomain. This field value takes precedence over the pod.beta.kubernetes.io/subdomain annotation value.

https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/services-networking/dns-pod-service/
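Putting the quoted docs together, a sketch of what per-pod DNS could look like on your cluster (v1.10 has the `hostname`/`subdomain` fields, so the beta annotations are not needed). The pod name `worker-0` and the image are assumptions for illustration; the namespace and headless Service name come from your manifests. Note that the subdomain must match the name of a headless Service in the same namespace for the record to be created:

```yaml
# Sketch only: a Pod with explicit hostname/subdomain fields.
# Paired with the headless Service named "worker" from the question,
# this pod resolves as worker-0.worker.airflow-dev.svc.cluster.local.
apiVersion: v1
kind: Pod
metadata:
  name: worker-0            # assumed name for this example
  namespace: airflow-dev
  labels:
    app: airflow
    tier: worker
spec:
  hostname: worker-0
  subdomain: worker         # must match the headless Service's name
  containers:
  - name: worker
    image: apache/airflow   # placeholder image
```

One caveat: because `hostname` and `subdomain` are fixed fields in the pod spec, this gives predictable names only when pods are written (or templated) individually. It cannot reproduce the auto-generated ReplicaSet names like worker-69887d5d59-6b6fc from the question.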

yosefrow