
I am starting to explore running Docker containers with Kubernetes. I did the following:

  1. docker run etcd
  2. docker run master
  3. docker run service proxy
  4. kubectl run web --image=nginx

To clean up the state, I first stopped all the containers and cleared the downloaded images. However, I still see pods running.

$ kubectl get pods 
NAME                   READY     STATUS    RESTARTS   AGE
web-3476088249-w66jr   1/1       Running   0          16m

How can I remove this?

Pankaj Garg
  • This is already answered [here](https://stackoverflow.com/questions/36138636/how-do-i-delete-orphan-kubernetes-pods?rq=1)! – Pankaj Garg Apr 14 '17 at 11:20
  • Possible duplicate of [How to stop Replicaset from restarting?](http://stackoverflow.com/questions/43230541/how-to-stop-replicaset-from-restarting) – Oswin Noetzelmann Apr 14 '17 at 20:11
  • Possible duplicate of [How Do I Delete Orphan Kubernetes Pods](https://stackoverflow.com/questions/36138636/how-do-i-delete-orphan-kubernetes-pods) – Jonathan Hall Apr 04 '18 at 18:23

3 Answers


To delete the pod:

kubectl delete pods web-3476088249-w66jr

If this pod was created by a ReplicaSet, Deployment, or anything else that manages replicas, find that resource and delete it first.

kubectl get all

This will list all the resources that have been created in your k8s cluster. To list the resources in a specific namespace, use kubectl get all --namespace=<your_namespace>

To get info about the resource that is controlling this pod, you can do

kubectl describe pod web-3476088249-w66jr

The output includes a "Controlled By" field (backed by ownerReferences in the pod's metadata) that identifies which resource created the pod.
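As a sketch, the owner can also be read directly from the pod's metadata with jsonpath (assumes a running cluster; the pod name is taken from the question, and the ReplicaSet name shown is illustrative):

```shell
# Print the kind/name of whatever controls the pod, via ownerReferences.
kubectl get pod web-3476088249-w66jr \
  -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'

# If this prints e.g. ReplicaSet/web-3476088249, delete that controller
# first so the pod is not recreated:
# kubectl delete replicaset web-3476088249
```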

Sushil Kumar Sah
  • @Madhusoodan k8s recreates the pod only if it is being controlled by some controller like a replica set, deployment, etc. You should describe the pod to find out who is controlling that pod, delete that controller first, and then delete the pod. This way the pod will be killed permanently. – Sushil Kumar Sah May 05 '19 at 02:50
  • Had to add -n <namespace> to the end to get this to work; without it, it didn't work for me because my pod was not in the default namespace (no error message either). – Hansang Nov 20 '20 at 01:50
  • Shouldn't it be `kubectl describe pod web-3476088249-w66jr` instead? – ikhvjs Aug 25 '21 at 12:00

When you do kubectl run ..., you create a Deployment, not a Pod directly. You can check this with kubectl get deploy. If you want to delete the pod, you need to delete the Deployment with kubectl delete deploy DEPLOYMENT.

I would recommend creating a namespace for testing when doing this kind of thing. Just run kubectl create ns test, then do all your tests in that namespace (by adding -n test to your commands). Once you have finished, run kubectl delete ns test, and you are done.
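The workflow above, as a sketch (requires a cluster; the namespace and pod names are illustrative):

```shell
# Create a throwaway namespace for experiments.
kubectl create ns test

# Run everything inside it by adding -n test.
kubectl run web --image=nginx -n test
kubectl get all -n test

# One command tears down the namespace and everything in it.
kubectl delete ns test
```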

suren

If you defined your object as a Pod, then

kubectl delete pod <--all | pod name> 

will remove the generated Pod(s). But if you wrapped your Pod in a Deployment object, then running the command above will only trigger their re-creation.

In that case, you need to run

kubectl delete deployment <--all | deployment name> 

Note that this does not remove a related Service object automatically; if you exposed the Deployment, delete the Service separately with kubectl delete service.
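A hedged end-to-end cleanup sketch for the situation in the question (assumes the Deployment and Service are both named web, following kubectl run web):

```shell
# Find the Deployment that keeps recreating the pod.
kubectl get deployments

# Delete it; its ReplicaSet and Pods are garbage-collected with it.
kubectl delete deployment web

# If you also exposed the Deployment, remove the Service separately.
kubectl delete service web

# Verify nothing is left.
kubectl get pods
```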

CeamKrier