164

I just saw that some of my pods got evicted by Kubernetes. What will happen to them? Will they just hang around like that, or do I have to delete them manually?

reachlin
  • 4
    Witnessing the same behavior, I have a pod that has been in `Evicted` state for 13 days now. Looks like evicted pods don't get removed (or maybe it is just a bug). – Elouan Keryell-Even Oct 23 '17 at 12:30
  • The podgc controller will reclaim those Failed/Succeeded pods when a configurable threshold is reached. – zhb Aug 07 '19 at 21:09
  • 2
    My Pods are evicted and there is a total of 40. So will I be charged per month for those evicted pods too? – Anant Sep 24 '19 at 02:42
  • A bunch of containers were evicted but I still have 2 containers running as expected. The failed ones were due to low resources (`DiskPressure`), which can be found using `kubectl describe pods my-pod-name --namespace prod` (a quick check for node pressure is sketched below). – prayagupa Apr 24 '20 at 21:49
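
If you want to check whether the node conditions that trigger evictions are currently present, one quick way is to look at the node status. A minimal sketch:

# List the node conditions that trigger kubelet evictions (MemoryPressure,
# DiskPressure, PIDPressure); "True" means the node is currently under pressure.
kubectl describe nodes | grep -E 'Name:|Pressure'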

17 Answers

131

A quick workaround I use is to delete all evicted pods manually after an incident. You can use this command:

kubectl get pods --all-namespaces -o json | jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | "kubectl delete pods \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c
Lenni
Kalvin
  • 3
    Check out this one too: https://gist.github.com/psxvoid/71492191b7cb06260036c90ab30cc9a0 – Pavel Sapehin Aug 22 '18 at 08:08
  • You must have a typo, `-a` argument is invalid. – Ilya Suzdalnitski Mar 30 '19 at 04:44
  • 45
    This (and similar answers) do not answer the OP question "What will happen to them [if you don't do anything]?" – Oliver Mar 05 '20 at 19:38
  • To run this on a schedule, you can set up a k8s cronjob, just follow the easy doc here: https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/ – mkumar118 Jan 25 '21 at 13:24
  • 5
    Alternative more readable option to delete within a single namespace: `kubectl get pod -n mynamespace | grep Evicted | awk '{print $1}' | xargs kubectl delete pod -n mynamespace` – Chris Halcrow Feb 24 '21 at 05:01
113

To delete pods in the Failed state in the default namespace:

kubectl -n default delete pods --field-selector=status.phase=Failed
ticapix
36

Evicted pods should be deleted manually. You can use the following command to delete all pods in the Failed state.

kubectl get pods --all-namespaces --field-selector 'status.phase==Failed' -o json | kubectl delete -f -
Hansika Weerasena
32

Depending on whether a soft or hard eviction threshold has been met, the containers in the Pod will be terminated with or without a grace period, the PodPhase will be marked as Failed, and the Pod deleted. If your application runs as part of e.g. a Deployment, another Pod will be created and scheduled by Kubernetes - probably on another Node that is not exceeding its eviction thresholds.

Be aware that eviction is not necessarily caused by thresholds: it can also be invoked via kubectl drain to empty a node, or triggered manually via the Kubernetes API.
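
If you want to see why a particular Pod was evicted before it is replaced or cleaned up, a minimal sketch (the pod and namespace names are placeholders):

# Human-readable details, including the eviction reason and the kubelet's message:
kubectl -n my-namespace describe pod my-evicted-pod

# Or pull just the status fields that an eviction populates:
kubectl -n my-namespace get pod my-evicted-pod \
  -o jsonpath='{.status.reason}{"\n"}{.status.message}{"\n"}'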

Simon Tesar
31

To answer the original question: the evicted pods will hang around until their number reaches the terminated-pod-gc-threshold limit (an option of kube-controller-manager, equal to 12500 by default). This is by-design behavior of Kubernetes (the same approach is used and documented for Jobs - https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup). Keeping the evicted pods around allows you to view their logs and check for errors, warnings, or other diagnostic output.
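
To get a feel for how close you are to that limit, you can count the terminated pods that are currently being kept. A small sketch (Succeeded pods count towards the threshold as well):

# Count Failed (e.g. Evicted) and Succeeded pods across all namespaces:
kubectl get pods -A --field-selector=status.phase=Failed --no-headers | wc -l
kubectl get pods -A --field-selector=status.phase=Succeeded --no-headers | wc -l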

victorm1710
15

The below command deletes all failed pods from all namespaces:

kubectl get pods -A | grep Evicted | awk '{print $2 " -n " $1}' | xargs -n 3 kubectl delete pod
Marcelo Aguiar
13

One more bash command to delete evicted pods

kubectl get pods | grep Evicted | awk '{print $1}' | xargs kubectl delete pod
Roman Marusyk
11

Just in case someone wants to automatically delete all evicted pods in all namespaces:

  • PowerShell
    Foreach( $x in (kubectl get po --all-namespaces --field-selector=status.phase=Failed --no-headers -o custom-columns=:metadata.name)) {kubectl delete po $x --all-namespaces }
  • Bash
kubectl get po --all-namespaces --field-selector=status.phase=Failed --no-headers -o custom-columns=:metadata.name | xargs kubectl delete po --all-namespaces
LucasPC
  • in case it helps, you can set this to run on a schedule as a k8s cronjob, by following the easy doc here: kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs (a rough sketch follows after these comments) – mkumar118 Jan 25 '21 at 13:26
  • I love you for adding the Pwsh version of the command. Running Windows 10 and I don't have Bash. – Ayushmati Aug 15 '22 at 08:15
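
Following up on the CronJob suggestion in the comments, here is a rough sketch of one way to schedule the cleanup in-cluster. The image name (bitnami/kubectl), the ServiceAccount wiring and the hourly schedule are assumptions; adjust them to your cluster's policies:

# A ServiceAccount that is allowed to list and delete pods cluster-wide
# (assumption: cluster-wide cleanup is acceptable in your environment).
kubectl -n kube-system create serviceaccount evicted-cleaner
kubectl create clusterrole evicted-cleaner --verb=get,list,delete --resource=pods
kubectl create clusterrolebinding evicted-cleaner --clusterrole=evicted-cleaner \
  --serviceaccount=kube-system:evicted-cleaner

# Hourly cleanup job; bitnami/kubectl is just one image that ships kubectl.
kubectl -n kube-system create cronjob evicted-cleaner --image=bitnami/kubectl \
  --schedule="0 * * * *" -- kubectl delete pods -A --field-selector=status.phase=Failed

# kubectl create cronjob cannot set the ServiceAccount, so patch it in afterwards:
kubectl -n kube-system patch cronjob evicted-cleaner --type=merge \
  -p '{"spec":{"jobTemplate":{"spec":{"template":{"spec":{"serviceAccountName":"evicted-cleaner"}}}}}}'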
10

kube-controller-manager exists by default in a working K8s installation. It appears that the default is a maximum of 12500 terminated pods before garbage collection kicks in.

Directly from the K8s documentation: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/#kube-controller-manager

--terminated-pod-gc-threshold int32     Default: 12500
Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If <= 0, the terminated pod garbage collector is disabled.
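
On a self-managed, kubeadm-style cluster the controller manager typically runs as a static pod, so the flag can be changed by editing its manifest on a control-plane node. A rough sketch under that assumption (the manifest path and the component label are kubeadm defaults; managed clusters generally do not expose this):

# See whether the flag is already set; no output means the 12500 default applies.
kubectl -n kube-system describe pod -l component=kube-controller-manager | grep terminated-pod-gc-threshold

# On the control-plane node, add e.g. "- --terminated-pod-gc-threshold=100" to the
# command list in the static pod manifest; the kubelet restarts the pod automatically.
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml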

xskxzr
Steveno
  • I have the kube-controller-manager pods on my master nodes. But how should I modify this flag? If I want to use `kubectl edit pod kube-controller-manager- -n kube-system` it gives me `pod is invalid` error after saving the config file. – Ali Tou Aug 08 '19 at 19:27
  • for us, we cannot edit the config for kube-controller-manager as we are on AKS. so we set up a quick cronjob for cleanup: kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs – mkumar118 Jan 25 '21 at 13:25
7

In case you have pods with a Completed status that you want to keep around, this only deletes the Failed ones:

kubectl get pods --all-namespaces --field-selector 'status.phase==Failed' -o json | kubectl delete -f -
tomasbasham
mefix
6

Yet another way, with awk.

To prevent any human error that could drive me crazy (deleting pods I actually want to keep), I first check the result of the get pods command:

kubectl -n my-ns get pods --no-headers --field-selector=status.phase=Failed     

If that looks good, here we go:

kubectl -n my-ns get pods --no-headers --field-selector=status.phase=Failed | \
awk '{system("kubectl -n my-ns delete pods " $1)}'

Same thing with pods of all namespaces.

Check:

kubectl get -A pods --no-headers --field-selector=status.phase=Failed     

Delete:

kubectl get -A pods --no-headers --field-selector status.phase=Failed | \
awk '{system("kubectl -n " $1 " delete pod " $2 )}'
davidxxx
3

OpenShift equivalent of Kalvin's command to delete all 'Evicted' pods:

eval "$(oc get pods --all-namespaces -o json | jq -r '.items[] | select(.status.phase == "Failed" and .status.reason == "Evicted") | "oc delete pod --namespace " + .metadata.namespace + " " + .metadata.name')"
ffghfgh
2

To delete all the Evicted pods by force, you can try this one-line command:

$ kubectl get pod -A | sed -nr '/Evicted/s/(^\S+)\s+(\S+).*/kubectl -n \1 delete pod \2 --force --grace-period=0/e'

Tip: using the p modifier of sed's s command instead of e will just print the actual deletion commands without running them:

$ kubectl get pod -A | sed -nr '/Evicted/s/(^\S+)\s+(\S+).*/kubectl -n \1 delete pod \2 --force --grace-period=0/p'
Weike
1

The command below will get all evicted pods from the default namespace and delete them:

kubectl get pods | grep Evicted | awk '{print$1}' | xargs -I {} kubectl delete pods/{}

user3009002
bhavin
0

Here is the 'official' guide for how to hard-code the threshold (if you do not want to see too many evicted pods): kube-controller-manager

But a known problem is how to get kube-controller-manager installed in the first place...
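
A quick way to see whether your cluster exposes kube-controller-manager at all (self-managed and kubeadm clusters usually do; managed offerings such as AKS, GKE or EKS usually do not, in which case a scheduled cleanup as in the other answers is the practical option):

# If nothing is listed, the controller manager is not reachable from kubectl
# and you cannot change its flags yourself.
kubectl -n kube-system get pods | grep controller-manager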

MandyShaw
tikael
  • Please advise on how the mentioned installation may be achieved, if it is troublesome. – MandyShaw Jul 30 '18 at 18:58
  • I do not know the answer either; that is why I mentioned it. And the OP did not mention the system he is using, so I do not know if he would have the same issue. BTW, the downvote is SUPER NICE. – tikael Jul 30 '18 at 19:09
  • You would I think have done better adding your idea as a comment since it doesn't fully answer the question (which is why I downvoted it - sorry but it happens to us all, including me just now). – MandyShaw Jul 30 '18 at 19:13
  • Check all the other answers above: the OP asked what happens, yet how many of them answer that, and how many just provide a way to delete the evicted pods? – tikael Jul 30 '18 at 19:15
0

When we have too many evicted pods in our cluster, this can lead to network load: each pod, even though it is evicted, is still connected to the network and, in the case of a cloud Kubernetes cluster, still blocks an IP address. This can lead to exhaustion of IP addresses as well if you have a fixed pool of IP addresses for your cluster.

Also, when we have too many pods in Evicted status, it becomes difficult to monitor them by running the kubectl get pod command, as the output is cluttered with evicted pods, which can be confusing at times.

To delete an evicted pod, run the following command:

kubectl delete pod <podname> -n <namespace>

What if you have many evicted pods?

kubectl get pod -n <namespace> | grep Evicted | awk '{print $1}' | xargs kubectl delete pod -n <namespace>
0

I found this to be the fastest way to delete evicted pods:

kubectl delete pod -A --field-selector 'status.phase==Failed'

(Only matters when you have A LOT of them accumulated)

Tobias Bergkvist