
I'm trying to create a debug copy of a pod using kubectl debug as follows:

kubectl debug $POD_NAME -it --share-processes --container=myapp-web --copy-to=$USER-debug -- /bin/bash

My app is a Django app -- the goal is to be able to shell in and run manage.py commands without worrying about the pod shutting down.

When I run this kubectl debug command, it works for about 30 seconds, then the session ends. I think this is because the debug pod has the same liveness and readiness probes as the production pod, but since my app isn't running, these fail. In the k8s events, I can see that these probes are failing for the debug pod.
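For reference, the probe failures show up on the copied pod itself, so they can be confirmed with something like the following (assuming the copy keeps the $USER-debug name from the command above):

kubectl describe pod "$USER-debug"
kubectl get events --field-selector involvedObject.name="$USER-debug" --sort-by=.lastTimestamp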

In my shell, I see the following after 30 seconds or so:

bash-5.1$ Session ended, resume using 'kubectl attach tao-debug -c myapp-web -i -t' command when the pod is running

Does it seem right that the session is getting killed because of a failing liveness probe? And if so, is there a way to copy a pod with kubectl debug, but ignoring the liveness probe?
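A possible fallback (an untested sketch; it assumes jq is available and reuses the $USER-debug name) would be to skip --copy-to and copy the pod by hand, with the probes stripped from the spec first:

# Strip the probes (plus labels/owner refs, so nothing else adopts the copy),
# rename the pod, and create it directly.
kubectl get pod "$POD_NAME" -o json \
  | jq 'del(.spec.containers[].livenessProbe,
            .spec.containers[].readinessProbe,
            .spec.containers[].startupProbe,
            .metadata.labels, .metadata.ownerReferences,
            .metadata.resourceVersion, .metadata.uid,
            .metadata.creationTimestamp, .spec.nodeName, .status)
        | .metadata.name = env.USER + "-debug"' \
  | kubectl create -f -
# The copy runs the normal container command, so shell in with:
# kubectl exec -it "$USER-debug" -c myapp-web -- /bin/bash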

tao_oat
  • These links might give you some insight into disabling the liveness probe: [link1](https://help.hcltechsw.com/onedb/helm/charts/0.4.27/c_charts_troubleshooting_liveness_probe.html) and [link2](https://jfrog.com/help/r/how-to-disable-liveness-and-readiness-probe-for-pods/how-to-disable-liveness-and-readiness-probe-for-pods-video). – Hemanth Kumar Aug 11 '23 at 11:50
  • Please let me know whether the shared info was helpful. I am happy to assist if you have any further queries. – Hemanth Kumar Aug 14 '23 at 03:47

0 Answers