4

I have a program which has multiple independent¹ components.

It is trivial to add a liveness probe to each of the components; however, it is not easy to have a single liveness probe that determines the health of all of the program's components.

How can I make Kubernetes look at multiple liveness probes and restart the container when any of them fails?

I know it can be achieved by adding more software, for example an additional bash script which does the liveness checks, but I am looking for a native way to do this.


¹ By independent I mean that the failure of one component does not make the other components fail.

styrofoam fly
  • Shouldn't they be running in different Pods, or at the very least in different containers within a Pod? – PhilipGough Mar 08 '18 at 12:40
  • @user2983542 they should, but not all applications are well-designed. – styrofoam fly Mar 08 '18 at 12:49
  • 1
    Right, I don't think this is possible, although I stand to be corrected. How about creating a sidecar container with some logic and set the probe to that? – PhilipGough Mar 08 '18 at 12:52
  • So if 1 of 3 of your components fails its liveness probe, should the pod be 33% unavailable? Trying to express what you want to achieve would already give you a clue. – 9ilsdx 9rvj 0lo Jul 09 '19 at 13:55

3 Answers

5

The Kubernetes API allows one liveness probe and one readiness probe per application (Deployment / Pod). I recommend creating a centralized validation service that exposes a REST endpoint:

livenessProbe:
  httpGet:
    path: /monitoring/alive
    port: 3401
    httpHeaders:
    - name: X-Custom-Header
      value: Awesome
  initialDelaySeconds: 15
  timeoutSeconds: 1
  periodSeconds: 15
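
What sits behind that /monitoring/alive endpoint is up to you: conceptually, the centralizing check only has to run one test per component and report healthy when every test passes. Below is a minimal sketch of that aggregation logic as a shell script, assuming each component exposes its own local health endpoint (the URLs are placeholders, not part of the original setup); the same script could just as well be wired to an exec probe like the one shown next:

#!/bin/sh
# Aggregated check: report healthy only if every component check passes.
# The URLs below are placeholders -- point them at whatever health
# endpoints your components actually expose.

COMPONENTS="
http://localhost:8081/healthz
http://localhost:8082/healthz
http://localhost:8083/healthz
"

for url in $COMPONENTS; do
  # --fail makes curl return a non-zero exit code on HTTP 4xx/5xx
  if ! curl --fail --silent --max-time 2 "$url" > /dev/null; then
    echo "component check failed: $url" 1>&2
    exit 1
  fi
done

exit 0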

Or try a bash script for the same task via an exec probe, like:

livenessProbe:
  exec:
    command:
      - /bin/bash
      - -c
      - ./liveness.sh
  initialDelaySeconds: 220
  timeoutSeconds: 5

liveness.sh

#!/bin/sh
# Exit 0 only if the Java process is still running.
# The [j]ava pattern keeps grep from matching its own process entry.
if [ "$(ps -ef | grep '[j]ava' | wc -l)" -ge 1 ]; then
  exit 0
else
  echo "Nothing happens!" 1>&2
  exit 1
fi

As for message handling: when the probe fails, you can see the failure in question in the Pod's events, e.g. "Warning Unhealthy Pod Liveness probe failed: Nothing happens!"

Hope this helps

paulogrell
  • More exactly, there is one probe per container (and there can be several containers per pod/deployment/daemonset). The container is restarted when its liveness probe fails. -- Thanks for the hint about the message! I did not understand this in the doc. – simohe Apr 28 '20 at 11:14
0

It doesn't do that. The model is pretty simple: one probe per container, and the restart policy is followed when the probe fails.

Understood about the designed-for-containers problem with legacy apps, but there really are a lot of ways to arrange for resources to be shared for legacy compatibility. If the components of this system are already different processes, then there should be a way to partition them into containers.

If the components are threads or some other intra-application modularization technique, then the liveness determination really has to come from inside the app.

Jonah Benton
0

For this case, create a custom script that handles the multiple checks as per your requirements and returns a single result for the container as the probe response. Kubernetes will use that single result to take the final decision in the container's lifecycle management.
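
A minimal sketch of what such a script could look like, assuming the program's components can each be checked independently (the process name, URL, and heartbeat path below are placeholders, not part of any real setup). It combines a process check, an HTTP check, and a heartbeat-file check into a single exit code, which can then be wired to the container through an exec liveness or readiness probe as in the first answer:

#!/bin/sh
# Combine several independent component checks into one probe result.
# Every check below is an example only -- replace it with whatever
# signal your components actually provide.

fail() {
  echo "liveness check failed: $1" 1>&2
  exit 1
}

# 1. A worker process must be running (the pattern is a placeholder).
pgrep -f "my-worker" > /dev/null || fail "worker process not running"

# 2. An internal HTTP component must answer (the URL is a placeholder).
curl --fail --silent --max-time 2 http://localhost:8080/healthz > /dev/null \
  || fail "http component not responding"

# 3. A background component must have updated its heartbeat file recently.
find /tmp/heartbeat -mmin -5 2>/dev/null | grep -q . || fail "heartbeat file is stale"

exit 0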