
Not sure how, but I've got numerous pods running that seem to be due to multiple ReplicaSets for each deployment.

This occurred after I did some heavy editing of multiple deployments.

Is there some easy way of deleting orphaned ReplicaSets, as opposed to manually inspecting each one, determining whether it matches a deployment, and then deleting it?

Chris Stryczynski

4 Answers


revisionHistoryLimit is an optional field that specifies the number of old ReplicaSets to retain to allow rollback

By default, 10 old ReplicaSets are kept; change it to one so you don't retain more than one old ReplicaSet.

Official link
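For illustration, here is a minimal Deployment manifest with the field set; the name, labels, and image are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx                  # illustrative
spec:
  replicas: 1
  revisionHistoryLimit: 1      # keep only one old ReplicaSet for rollback
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx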

I tested the field as follows.

I created an NGINX deployment and updated it multiple times, which generated a few ReplicaSets, as listed below.
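One way to reproduce such a rollout (the image tags are illustrative):

$ kubectl create deployment nginx --image=nginx:1.19
$ kubectl set image deployment/nginx nginx=nginx:1.20
$ kubectl set image deployment/nginx nginx=nginx:1.21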

$ kubectl get all
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-854998f596-6jtth   1/1     Running   0          18s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   9d

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           6m20s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-59d5958c9f   0         0         0       4m5s
replicaset.apps/nginx-669cf47c4f   0         0         0       94s
replicaset.apps/nginx-6ff549666b   0         0         0       2m21s
replicaset.apps/nginx-854998f596   1         1         1       2m7s
replicaset.apps/nginx-966c7f84     0         0         0       108s

Edit the running deployment and set the revisionHistoryLimit field to zero (revisionHistoryLimit: 0):

$ kubectl edit deployments.apps nginx
deployment.apps/nginx edited

The old ReplicaSets are removed:

$ kubectl get all
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-854998f596-6jtth   1/1     Running   0          52s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   9d

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           6m54s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-854998f596   1         1         1       2m41s
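You can also confirm through the rollout history, which should now list only the current revision (same deployment as above):

$ kubectl rollout history deployment/nginx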
DT.

A possible way to manually remove old ReplicaSets in a Kubernetes cluster is to run this command:

kubectl delete replicaset $(kubectl get replicaset -o jsonpath='{ .items[?(@.spec.replicas==0)].metadata.name }')
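Note that this only covers the current namespace. A rough sketch for cleaning every namespace; be careful, as it deletes any ReplicaSet scaled to zero, including ones belonging to a Deployment you deliberately scaled down:

kubectl get replicaset --all-namespaces \
  -o jsonpath='{range .items[?(@.spec.replicas==0)]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' \
| while read -r ns name; do
    kubectl -n "$ns" delete replicaset "$name"
  done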

For further details, see also the thread from which this answer is taken: https://stackoverflow.com/a/68274835/5538923 .

marcor92

You can edit your deployment and limit the history to 2 (or 0):

kubectl -n master edit deploy your_deployment

spec:
  replicas: 2
  revisionHistoryLimit: 2

Not sure how, but I've got numerous pods running that seem to be due to multiple ReplicaSets for each deployment.

This is due to the revision history: the limit is set to 10 by default, as a safety measure that allows rolling back to an old revision of a deployment.

Docs: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#revision-history-limit

If you want to clean up the old ReplicaSets, you can delete them manually or set a different limit.
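For example, deleting one of the old ReplicaSets from the earlier listing by name (the name comes from the example output and will differ in your cluster):

$ kubectl delete replicaset nginx-59d5958c9f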

Quoted from the docs:

More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas will be cleaned up. In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up.
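In other words, with the limit at zero an undo attempt should fail, because no older ReplicaSet is left to roll back to (assuming the nginx deployment from the first answer):

$ kubectl rollout undo deployment/nginx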

If you'd rather not edit files interactively, you can use kubectl patch to update the revision history limit, e.g.:

# change me
NAMESPACE=default
DEPLOYMENT=nginx

kubectl \
   -n ${NAMESPACE} patch deployment.apps/${DEPLOYMENT} --type=json \
   -p '[{"op":"replace","path":"/spec/revisionHistoryLimit","value":0}]'
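You can verify that the patch took effect, e.g.:

$ kubectl -n ${NAMESPACE} get deployment ${DEPLOYMENT} -o jsonpath='{.spec.revisionHistoryLimit}'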
dnozay