
On my Kubernetes setup, I have two Services: A and B.
Service B depends on Service A being fully started, though. I would now like to set a TCP readiness probe in the Pods of Service B, so they test whether any Pod of Service A is fully operating.

The readinessProbe section of Service B's Deployment looks like:

readinessProbe:
  tcpSocket:
    host: serviceA.mynamespace.svc.cluster.local
    port: 1101 # same port of Service A Readiness Check

I can apply these changes, but the readiness probe fails with:

Readiness probe failed: dial tcp: lookup serviceB.mynamespace.svc.cluster.local: no such host

I use the same hostname in other places (e.g. I pass it as an ENV variable to the container) and it works and gets resolved there.

Does anyone have an idea how to get the readiness probe working against another service, or some other way of doing dependency checking between services? Thanks :)

tom-tr
  • I think this may be the expected behavior because the app is not yet ready on the other side; try putting initialDelaySeconds=60 – Ijaz Ahmad Jul 11 '19 at 10:20
  • Check out Pod Ready++, which would be useful for your use case: https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/0007-pod-ready%2B%2B.md – Suresh Vishnoi Jul 11 '19 at 10:30
  • 1
    Say you start both B without starting service A. What happens? (If it never passes its local health checks, and Kubernetes will automatically restart it, is that bad?) – David Maze Jul 11 '19 at 10:34
  • @DavidMaze: I did not quite get the question. Currently, both Services pass their health checks. However, I have to make sure that Service A is up and running before Service B runs. If I don't do that, I have to kill the Pod from Service B's Deployment and let the Deployment create a new one; then it works fine. But that is manual intervention, and consequently bad for operations. – tom-tr Jul 11 '19 at 11:20

2 Answers


Because readiness and liveness probes are fully managed by the kubelet node agent, and the kubelet inherits its DNS configuration from the node itself, the probe cannot resolve Kubernetes-internal DNS records:

For a probe, the kubelet makes the probe connection at the node, not in the pod, which means that you can not use a service name in the host parameter since the kubelet is unable to resolve it.

You could consider a scenario where your source Pod A uses the node's IP address by setting hostNetwork: true; the kubelet can then reach the target and the readiness probe in Pod B succeeds, as described in the official k8s documentation:

tcpSocket:
  host: <node hostname or IP address where Pod A resides>
  port: 1101

However, I've found a Stack Overflow thread with a more efficient solution for achieving the same result, using Init Containers.
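A minimal sketch of that init-container approach, reusing the names from the question (serviceA, port 1101; the image tag and the service-b container are placeholders). Unlike the kubelet, an init container runs inside the Pod's network namespace, so cluster DNS resolves normally:

```yaml
spec:
  initContainers:
    - name: wait-for-service-a
      image: busybox:1.36
      # Block Pod startup until Service A accepts TCP connections.
      command:
        - sh
        - -c
        - until nc -z serviceA.mynamespace.svc.cluster.local 1101; do sleep 2; done
  containers:
    - name: service-b
      image: my-service-b:latest  # placeholder
```

The main container of Service B only starts after the init container exits successfully, which gives you the ordering guarantee without touching the readiness probe.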

Nick_Kh

In addition to Nick_Kh's answer, another workaround is to use an exec probe, where a command is executed inside the container.

To perform a probe, the kubelet executes the command cat /tmp/healthy in the target container. If the command succeeds, it returns 0, and the kubelet considers the container to be alive and healthy.

An example:

readinessProbe:
  exec:
    command:
      - sh
      - -c
      - wget -T2 -O- http://service
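The contract behind this probe is just the command's exit status: 0 means ready, anything else means not ready. A small shell sketch of what the kubelet effectively does (check, true, and false here stand in for the probe machinery; the real probe command would be the wget call above):

```shell
# Run a probe command and map its exit status to a readiness verdict,
# the way the kubelet interprets an exec probe.
check() {
  if "$@" >/dev/null 2>&1; then
    echo ready
  else
    echo not-ready
  fi
}

check true    # prints: ready
check false   # prints: not-ready
```

So `wget -T2 -O- http://service` marks the Pod ready exactly when the HTTP request succeeds within the 2-second timeout.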
Meng-Yuan Huang
  • Just curious: if I install 'external-dns' and it's configured to update public DNS servers (say, AWS Route 53), will the kubelet then be able to perform readiness probes targeting in-cluster services? – Mandar K Aug 05 '23 at 12:10