I am trying to run a Knative service on GKE, but the pods are giving me a CrashLoopBackOff error. What can be done to resolve this?
- Try describing the pod with kubectl describe po podName -n namespace. The Events section at the end might help you understand the reason; one possible reason is that the Docker image could not be pulled successfully. If possible, provide more details, such as the output of the above command. – Tarun Khosla Aug 12 '19 at 14:56
2 Answers
CrashLoopBackOff is a Kubernetes pod state. It means your pod is constantly failing and restarting; at some point Kubernetes slows down the pod's restart rate to save resources in the cluster.
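You typically see it in the STATUS column of kubectl get pods; the output below is only an illustrative sketch with a made-up pod name:

kubectl get pods
NAME                                    READY   STATUS             RESTARTS   AGE
myservice-deployment-5d9f8c6b7d-abcde   1/2     CrashLoopBackOff   7          10m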
There are several ways to debug this error:
This one gets all the information about the pod, including its status; reading the status part carefully is essential here.
kubectl get pod $podname -o yaml
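In that YAML, the status section is where the crash shows up. A trimmed, illustrative sketch of what it might look like (the container name here is just a placeholder):

status:
  containerStatuses:
  - name: user-container        # placeholder name
    restartCount: 7
    state:
      waiting:
        reason: CrashLoopBackOff
    lastState:
      terminated:
        exitCode: 1
        reason: Error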
This one shows what happened to the pod, as a timeline of events plus some additional info.
kubectl describe pod $podname
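The Events table at the bottom of the describe output is usually the quickest pointer; an illustrative excerpt might look roughly like this:

Events:
  Type     Reason   Age                From     Message
  ----     ------   ----               ----     -------
  Warning  BackOff  2m (x12 over 5m)   kubelet  Back-off restarting failed container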
This one shows the logs, but for the previous container instance, so it is the complete log from start to end. Without --previous it shows the current container, which may not contain all the logs.
kubectl logs $podname --previous
The last one is not a command but an approach: if you really want to dig into the container and the commands above didn't help, add a sidecar to the pod and check the filesystem for errors, or simply set .spec.restartPolicy to Never and exec into it, as sketched below.
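A minimal sketch of that last approach, assuming a placeholder pod name and image (not taken from the question); the command override keeps the container alive instead of letting the crashing entrypoint run, so you can poke around inside:

# Apply a throwaway pod that never restarts and just sleeps (name and image are placeholders)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: myapp-debug
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: gcr.io/my-project/myapp:latest
    command: ["sleep", "infinity"]
EOF
# Then exec into it and inspect the filesystem
kubectl exec -it myapp-debug -- /bin/sh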

- Although that answer is pretty exhaustive, I feel it is missing the ultimate debug tool: `kubectl debug`. Use `kubectl debug pod-name` to start a new Pod, as a copy of your faulty deployment. Instead of using whichever entrypoint was configured, it would just start a shell and get you in. You may then copy/paste the entrypoint command to figure out what went wrong. – SYN Nov 12 '19 at 13:12
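For what it's worth, on kubectl versions that ship the debug subcommand, a hedged sketch of that idea (pod and container names are placeholders) could be:

kubectl debug -it [POD_NAME] --copy-to=[POD_NAME]-debug --container=[CONTAINER_NAME] -- sh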
The above answer is correct. The following steps also help when working through a CrashLoopBackOff:
- Check "Exit Code" of the crashed container to get to the root cause of the issue.
From the describe pod command’s output, as mentioned above, in the
containers: [CONTAINER_NAME]: last state: exit code
field.
- If the exit code is 1, the container crashed because the application crashed.
If the exit code is 0, verify how long your app was running. Containers exit when your application's main process exits; if your app finishes execution very quickly, the container might continue to restart.
- Connect to a running container. Run this command to get a shell in the Pod:
kubectl exec -it [POD_NAME] -- /bin/bash
If there is more than one container in your Pod, add -c [CONTAINER_NAME].
You can now use this container for testing by running bash commands from it.
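As a quick sketch of the exit-code check from the first bullet (the pod name is a placeholder, and the index 0 assumes the container you care about is listed first), you can pull it straight out with jsonpath:

kubectl get pod [POD_NAME] -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'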
Here's the link for all Troubleshooting issues with Kubernetes Engine.
