My pods are getting SIGTERM automatically for an unknown reason, and I am unable to find the root cause of why the kubelet sends SIGTERM to them. When I run kubectl describe pod <podname> -n <namespace>, only a Killing event is present under the Events section; I don't see any Unhealthy status before the kill event.

Is there any way to debug further from the pod events, or are there specific log files where I can find a trace of the reason for the SIGTERM? I tried running kubectl describe on the Killing event, but there seems to be no such command to drill down into events further. Any other approach to debug this issue is appreciated. Thanks in advance!

(kubectl describe pod snippet attached as a screenshot)
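For reference, the commands I have been using look roughly like this (pod name and namespace are placeholders):

```
# Describe the pod: only a Killing event appears under Events
kubectl describe pod <pod-name> -n <namespace>

# List recent events in the namespace, sorted by time
kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp
```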
2 Answers
Can you please share the YAML of your deployment so we can try to replicate your problem?
Based on your attached screenshot, it looks like your readiness probe repeatedly failed to complete (it didn't run and then fail; it never completed at all), and the cluster therefore killed the pod.
Without knowing what your Docker image is doing, it's hard to debug further from here.
As a first debugging step, you can try kubectl logs -f -n {namespace} {pod-name}
to see what the pod is doing and whether it's erroring there.
The error Client.Timeout exceeded while waiting for headers
suggests your container is proxying something, so perhaps the upstream you're proxying to isn't responding.
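A minimal set of checks, assuming a single-container pod, could look like this (namespace and pod name are placeholders):

```
# Stream the container logs to see whether the app is erroring
kubectl logs -f -n <namespace> <pod-name>

# If the container has restarted, also check the previous container's logs
kubectl logs -n <namespace> <pod-name> --previous

# Show the pod's events again (probe failures, kill reasons, etc.)
kubectl describe pod <pod-name> -n <namespace>
```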

A pod getting SIGTERM usually means that its probes are failing and the kubelet has signalled the pod to terminate, so here is what you should check:

- What are the readiness and liveness probes? Are the probes (exec/HTTP) configured correctly and actually reflecting the application's status? If not, get the probes fixed.
- Is the timing configured for the readiness and liveness probes sufficient and correct? We often forget to account for the time the pod needs to become ready, so the liveness probe triggers too early and gets the pod killed. Setting initialDelaySeconds on the liveness probe will help here (see the example below).
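As a rough illustration (the container name, image, health endpoint, port, and timings are assumptions you would adjust for your app), the probe section of a container spec could look like this:

```
containers:
  - name: my-app            # assumed container name
    image: my-app:latest    # assumed image
    readinessProbe:
      httpGet:
        path: /healthz      # assumed health endpoint
        port: 8080          # assumed container port
      initialDelaySeconds: 10   # wait for the app to start before the first probe
      periodSeconds: 5
      timeoutSeconds: 2
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 30   # give the app enough startup time before liveness kicks in
      periodSeconds: 10
      timeoutSeconds: 2
      failureThreshold: 3       # consecutive failures before the kubelet kills the container
```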
