
Pod lifecycle is managed by the kubelet in the data plane.

As per the definition: If the liveness probe fails, the kubelet kills the container

A Pod is essentially one or more containers sharing a dedicated network namespace and IPC namespace, set up by a sandbox container.


Say the Pod is a single-app-container Pod; then, upon liveness failure:

  • Does kubelet kill the Pod?

or

  • Does kubelet kill the container (only) within the Pod?
Wytrzymały Wiktor
overexchange
    Since ["*Pods are the smallest deployable units of computing that you can create and manage in Kubernetes*"](https://kubernetes.io/docs/concepts/workloads/pods/), I would expect that the **pod** gets destroyed and recreated. – Turing85 Oct 03 '21 at 14:02
  • @Turing85 So, if a Pod has two app containers, then liveness failure of one container will affect the other container, Is that correct? – overexchange Oct 03 '21 at 14:48
    I would assume so, yes. But - as I have written - this is an expectation of mine, not hard knowledge. – Turing85 Oct 03 '21 at 14:50
  • @Turing85 So, `livenessProbe` & `readinessProbe` is written at Pod level, but not container level? – overexchange Oct 03 '21 at 16:02
    No. [Probes are defined on container-level](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/). The evaluation/response, however, seems to be on pod-level. The example in above documentation shows that the **pod** has been restarted as response of the **container probe** failing. – Turing85 Oct 03 '21 at 16:28

2 Answers


A pod is indeed the smallest deployable unit in Kubernetes, but that does not mean it is in fact "empty" without a container.

In order to spawn a pod, and thereby the namespaces that further containers attach to, a very small container is created from the pause image. This sandbox container is allocated the IP that is then used for the pod. Afterwards, the init containers and the runtime containers declared for the pod are started.

If the liveness probe fails, the container is restarted. The pod survives this. That is even important: you might want to get the logs of the crashed/restarted container afterwards. This would not be possible if the pod were destroyed and recreated.
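Because the pod object survives, the previous container instance's logs stay retrievable. Assuming a pod named `app-1` with a restarted container `web` (illustrative names), a sketch:

```shell
# Logs of the current (restarted) container instance
kubectl logs app-1 -c web

# Logs of the previous, crashed instance -- available only because
# the pod itself was not destroyed and recreated
kubectl logs app-1 -c web --previous
```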

Thomas

The kubelet uses liveness probes to know when to restart a container (NOT the entire Pod). If the liveness probe fails, the kubelet kills the container, and the container may then be restarted, depending on the Pod's restart policy.


I've created a simple example to demonstrate how it works.

First, I've created an app-1 Pod with two containers (web and db). The web container has a liveness probe configured, which always fails because the /healthz path is not configured.

$ cat app-1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: app-1
  name: app-1
spec:
  containers:
  - image: nginx
    name: web
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: Custom-Header
          value: Awesome
  - image: postgres
    name: db
    env:
    - name: POSTGRES_PASSWORD
      value: example

After applying the above manifest and waiting some time, we can describe the app-1 Pod to check that only the web container has been restarted and the db container is running without interruption:
NOTE: I only provided important information from the kubectl describe pod app-1 command, not the entire output.

$ kubectl apply -f app-1.yml
pod/app-1 created
    
$ kubectl describe pod app-1
    
Name:         app-1
...
Containers:
  web:
...
    Restart Count:  4   <--- Note that the "web" container was restarted 4 times
    Liveness:       http-get http://:8080/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
...
  db:
...
    Restart Count:  0   <--- Note that the "db" container works fine
...
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
...
  Normal   Killing    78s (x2 over 108s)   kubelet            Container web failed liveness probe, will be restarted
...

We can connect to the db container to see if it is running:
NOTE: We can use the db container even while the web container is being restarted.

$ kubectl exec -it app-1 -c db -- bash
root@app-1:/#

In contrast, after connecting to the web container, we can observe that the liveness probe restarts this container:

$ kubectl exec -it app-1 -c web -- bash
root@app-1:/# command terminated with exit code 137
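The behavior demonstrated above can be sketched as a toy model (hypothetical code, not kubelet source): each container tracks its own consecutive probe failures, and only the failing container is killed and restarted while the Pod persists. The `failure_threshold` default of 3 matches the probe defaults shown in the describe output; everything else is an assumption for illustration.

```python
# Toy model of per-container liveness handling within one Pod.
# Hypothetical sketch -- not the actual kubelet implementation.
from dataclasses import dataclass, field


@dataclass
class Container:
    name: str
    failure_threshold: int = 3      # livenessProbe.failureThreshold default
    consecutive_failures: int = 0
    restart_count: int = 0


@dataclass
class Pod:
    name: str
    restart_policy: str = "Always"  # Always | OnFailure | Never
    containers: list = field(default_factory=list)


def record_probe(pod: Pod, container: Container, success: bool) -> str:
    """Return what would happen after one liveness probe result."""
    if success:
        container.consecutive_failures = 0
        return "ok"
    container.consecutive_failures += 1
    if container.consecutive_failures < container.failure_threshold:
        return "ok"                 # below threshold: no action yet
    # Threshold reached: only this container is killed; the Pod survives.
    container.consecutive_failures = 0
    if pod.restart_policy in ("Always", "OnFailure"):
        container.restart_count += 1
        return f"restarted {container.name}"
    return f"killed {container.name} (restartPolicy=Never, not restarted)"


web = Container("web")
db = Container("db")
pod = Pod("app-1", containers=[web, db])

# Three consecutive liveness failures on "web" reach the threshold.
for _ in range(3):
    action = record_probe(pod, web, success=False)

print(action)                                # -> restarted web
print(web.restart_count, db.restart_count)   # -> 1 0
```

Note that `db` is untouched, mirroring the `Restart Count: 0` seen in the describe output above.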
matt_j