
I have installed Prometheus using the Helm chart, so I have 4 deployments listed:

  • prometheus-alertmanager
  • prometheus-server
  • prometheus-pushgateway
  • prometheus-kube-state-metrics
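
(For context, I installed the chart roughly like below; the release name prometheus and the default namespace are only assumptions based on the resource names above.)

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/prometheus
# lists the four deployments created by the chart
kubectl get deployments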

All pods of these deployments were running fine. By mistake I restarted one deployment using this command:

kubectl rollout restart deployment prometheus-alertmanager

Now a new pod is getting created and keeps crashing; if I delete the deployment, the previous (working) pod will also be deleted. So what can I do about that CrashLoopBackOff pod?
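
This is the kind of check I can run on the failing pod (the pod name below is only a placeholder for the real one):

# shows the pod events and the reason for the restarts
kubectl describe pod prometheus-alertmanager-xxxxx-yyyyy
# shows the logs of the last crashed container
kubectl logs prometheus-alertmanager-xxxxx-yyyyy --previous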

Screenshot of kubectl output

2 Answers


You can simply delete that pod with the kubectl delete pod <pod_name> command, or attempt to delete all pods in CrashLoopBackOff status with:

kubectl delete pod `kubectl get pods | awk '$3 == "CrashLoopBackOff" {print $1}'`

Make sure that the corresponding deployment is set to 1 replica (or any other chosen number). If you delete a pod of that deployment, it will create a new one while keeping the desired replica count.
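
A minimal sketch of how to check and, if needed, restore the replica count (the deployment name is taken from the question):

# shows the desired/ready replicas of the deployment
kubectl get deployment prometheus-alertmanager
# scale back to 1 replica if the desired count was changed
kubectl scale deployment prometheus-alertmanager --replicas=1

After that, deleting the crashing pod simply makes the deployment recreate it.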


These two pods (one Running and the other in CrashLoopBackOff) belong to different ReplicaSets, since they are suffixed with different template hashes, e.g. pod-abc-123 and pod-abc-456 come from the same ReplicaSet, whereas pod-abc-123 and pod-def-456 come from different ReplicaSets (here abc/def is the pod-template-hash).
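
One way to confirm which ReplicaSet a pod belongs to is its pod-template-hash label, which matches the hash in the ReplicaSet name (the pod name below is a placeholder):

kubectl get pods --show-labels | grep prometheus-alertmanager
# or, for a single pod:
kubectl get pod prometheus-alertmanager-xxxxx-yyyyy -o jsonpath='{.metadata.labels.pod-template-hash}'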

A Deployment creates a ReplicaSet, so make sure you delete the corresponding old ReplicaSet: run kubectl get rs | grep 99dd and delete the one it returns, similar to what was done for the prometheus-server one.
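
A rough sketch of that cleanup (the ReplicaSet name is a placeholder for the one returned by the grep):

kubectl get rs | grep prometheus-alertmanager
# delete the old ReplicaSet identified above
kubectl delete rs <old-replicaset-name>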
