I am testing lifecycle hooks, and post-start works pretty well, but I think pre-stop never gets executed. There is another answer, but it is not working, and actually, if it did work, it would contradict the k8s documentation. So, from the docs:

PreStop

This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state.

So, the API request makes me think I can simply do kubectl delete pod POD, and I am good.

More from the docs (pod shutdown process):

1.- User sends command to delete Pod, with default grace period (30s)

2.- The Pod in the API server is updated with the time beyond which the Pod is considered “dead” along with the grace period.

3.- Pod shows up as “Terminating” when listed in client commands

4.- (simultaneous with 3) When the Kubelet sees that a Pod has been marked as terminating because the time in 2 has been set, it begins the pod shutdown process.

4.1.- If one of the Pod’s containers has defined a preStop hook, it is invoked inside of the container. If the preStop hook is still running after the grace period expires, step 2 is then invoked with a small (2 second) extended grace period.

4.2.- The container is sent the TERM signal. Note that not all containers in the Pod will receive the TERM signal at the same time and may each require a preStop hook if the order in which they shut down matters.

...

So, since the pod goes into Terminating when you do kubectl delete pod POD, I assume I can do it that way.
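
To be explicit, this is roughly the test I run (POD is a placeholder for the actual pod name):

# delete the pod and watch it go through Terminating
kubectl delete pod POD --wait=false
kubectl get pods -w

# then, on the node where the pod was running:
ls -l /usr/hooks/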

According to the other answer, I can't do this; the way is to do a rolling update. Well, I tried that in all possible ways and it didn't work either.

My tests:

I have a deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deploy
spec:
  replicas: 1
  template:
    metadata:
      name: lifecycle-demo
      labels:
        lifecycle: demo
    spec:
      containers:
      - name: nginx
        image: nginx
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh 
              - -c
              - echo "Hello at" `date` > /usr/share/post-start
          preStop:
            exec:
              command:
              - /bin/sh"
              - -c
              - echo "Goodbye at" `date` > /usr/share/pre-stop
        volumeMounts:
        - name: hooks
          mountPath: /usr/share/
      volumes:
      - name: hooks
        hostPath:
          path: /usr/hooks/

I expect the pre-stop and post-start files to be created in /usr/hooks/ on the host (the node where the pod is running). post-start is there, but pre-stop never shows up.

  • I tried kubectl delete pod POD, and it didn't work.
  • I tried kubectl replace -f deploy.yaml, with a different image, and when I do kubectl get rs, I can see the new replicaSet created, but the file isn't there.
  • I tried kubectl set image ..., and again, I can see the new replicaSet created, but the file isn't there.
  • I even tried putting them in completely separate volumes, as I thought maybe when I kill the pod and it gets re-created, the folder where the files should be created gets re-created too, deleting the pre-stop file. But that was not the case. Note: the pod always gets re-created on the same node; I made sure of that.
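
One check that narrows this down is the pod's events: the kubelet records hook failures there, so a broken preStop should surface as a FailedPreStopHook event (that reason string is my expectation of what the kubelet emits, so treat it as an assumption):

# Events section is at the bottom of the describe output
kubectl describe pod POD

# or filter the events directly
kubectl get events --field-selector involvedObject.name=POD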

What I have not tried is to stress the container and break it by setting a low CPU limit, but that's not what I need.

Any idea under what circumstances the preStop hook would get triggered?

    There is a typo in the second "/bin/sh" for preStop: there is an extra double quote ("). It still let me create the deployment, but it was the reason the file was not being created. All works fine now. – suren Mar 21 '19 at 19:33

3 Answers

Posting this as a community wiki answer for better visibility.

There is a typo in the second "/bin/sh" for preStop: there is an extra double quote ("). It still let me create the deployment, but it was the reason the file was not being created. All works fine now.

The exact point where the issue lay was here:

          preStop:
            exec:
              command:
              - /bin/sh" # <- this quotation
              - -c
              - echo "Goodbye at" `date` > /usr/share/pre-stop

To be correct, it should look like this:

          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - echo "Goodbye at" `date` > /usr/share/pre-stop

At the time of writing this community wiki post, the Deployment manifest from the question was outdated. The following changes were needed to be able to run it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: good-deployment
spec:
  selector:
    matchLabels:
      lifecycle: demo 
  replicas: 1
  template:
    metadata:
      labels:
        lifecycle: demo
    spec:
      containers:
      - name: nginx
        image: nginx
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh 
              - -c
              - echo "Hello at" `date` > /usr/share/post-start
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - echo "Goodbye at" `date` > /usr/share/pre-stop
        volumeMounts:
        - name: hooks
          mountPath: /usr/share/
      volumes:
      - name: hooks
        hostPath:
          path: /usr/hooks/

The changes were the following:

1. apiVersion

+--------------------------------+---------------------+
|               Old              |          New        |
+--------------------------------+---------------------+
| apiVersion: extensions/v1beta1 | apiVersion: apps/v1 |
+--------------------------------+---------------------+
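
You can confirm which API group serves Deployments on your cluster; the output columns shown in the comment are indicative and may differ between kubectl versions:

kubectl api-resources | grep -i deployment
# NAME         SHORTNAMES   APIVERSION   NAMESPACED   KIND
# deployments  deploy       apps/v1      true         Deployment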

2. selector

Added a selector section under spec, since in apps/v1 spec.selector is required and must match the Pod template labels:

spec:
  selector:
    matchLabels:
      lifecycle: demo 
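
A quick way to verify the fixed manifest end to end; the file name deploy.yaml and shell access to the node are assumptions here:

kubectl apply -f deploy.yaml
kubectl delete pod -l lifecycle=demo --wait=false

# on the node where the pod was scheduled:
cat /usr/hooks/pre-stop   # should contain the "Goodbye at ..." line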

I know it's too late to answer, but it is worth adding here. I spent a full day figuring out this preStop behavior in K8s.

K8s does not print any logs from the preStop stage. preStop is part of the container lifecycle, also called a hook.

Generally, hook and probe (liveness & readiness) logs will not show up in kubectl logs.

Read this issue and you will get the full picture.

But there is an indirect way to get the output into kubectl logs; follow the last comment in the issue linked above.

Adding it here as well:

lifecycle:
  postStart:
    exec:
      command:
      - /bin/sh
      - -c
      - sleep 10; echo 'hello from postStart hook' >> /proc/1/fd/1
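
The same redirection works for preStop; a minimal sketch, with an illustrative message:

lifecycle:
  preStop:
    exec:
      command:
      - /bin/sh
      - -c
      - echo 'goodbye from preStop hook' >> /proc/1/fd/1

/proc/1/fd/1 is the stdout of the container's PID 1, which is the stream the container runtime captures, so the hook output ends up in kubectl logs.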

Posting this as a community wiki answer for better visibility.

When a pod should be terminated:

  • A SIGTERM signal is sent to the main process (PID 1) in each container, and a “grace period” countdown starts (defaults to 30 seconds for a k8s pod; see below for how to change it).

  • Upon receiving the SIGTERM, each container should start a graceful shutdown of the running application and exit.

If a container doesn’t terminate within the grace period, a SIGKILL signal is sent and the container is forcibly terminated.
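
To make use of the grace period, the main process has to handle SIGTERM itself. A minimal sketch of a shell entrypoint doing that (purely illustrative):

#!/bin/sh
# exit cleanly when Kubernetes sends SIGTERM
trap 'echo "SIGTERM received, shutting down"; exit 0' TERM
while true; do
  sleep 1   # stand-in for real work; the trap fires once sleep returns
done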

For a detailed explanation, please see:

Kubernetes: Termination of pods

Kubernetes: Pods lifecycle hooks and termination notice

Kubernetes: Container lifecycle hooks

Always confirm this:

  • Check whether preStop takes more than 30 seconds to run (longer than the default grace period). If it does, increase terminationGracePeriodSeconds to more than 30 seconds, maybe 60, as in the sketch below.
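
A minimal sketch of a pod spec where the hook needs more than the default grace period; the sleep length is illustrative:

spec:
  terminationGracePeriodSeconds: 60   # must exceed the preStop duration
  containers:
  - name: nginx
    image: nginx
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 40"]   # long-running hook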