We have some OpenShift Deployments. We did not run any deployments or make manual changes, but we still got an alert during the night. When we investigated, we found that these Deployments had created a new ReplicaSet whose YAML is identical to the old ReplicaSet's. The event log shows a new pod being scaled up and then scaled back down within one minute, yet the latest revision still belongs to the old ReplicaSet.
A while ago we had a separate problem: insufficient disk space caused some pods to be evicted. We are not sure whether that is related.
Could somebody help me with this issue?
Command output:
oc get rs -o wide -n test | grep example
NAME                 DESIRED   CURRENT   READY   AGE
example-6d4f99bc54   0         0         0       18h
example-759469945f   1         1         1       70d
example-ff8f986960   0         0         0       110d
Old ReplicaSet:
kind: ReplicaSet
apiVersion: apps/v1
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: '2'
    deployment.kubernetes.io/max-replicas: '3'
    deployment.kubernetes.io/revision: '24'
    deployment.kubernetes.io/revision-history: '12,14,16,18,20,22'
New ReplicaSet:
kind: ReplicaSet
apiVersion: apps/v1
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: '2'
    deployment.kubernetes.io/max-replicas: '3'
    deployment.kubernetes.io/revision: '23'
    deployment.kubernetes.io/revision-history: '13,15,17,19,21'
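For what it's worth, the Deployment controller treats the ReplicaSet with the highest `deployment.kubernetes.io/revision` annotation as the current one, so with the annotations above (old at 24, new at 23) it is consistent that the latest revision stays on the old ReplicaSet. A minimal sketch of that comparison, using the revision numbers copied from the annotations above (the variable names are my own, not anything from the cluster):

```shell
# Revisions taken from the deployment.kubernetes.io/revision annotations above
old_rev=24   # "old" ReplicaSet
new_rev=23   # newly created ReplicaSet

# The controller considers the higher revision the current template
if [ "$old_rev" -gt "$new_rev" ]; then
  echo "old ReplicaSet holds the latest revision"
fi
```

This prints `old ReplicaSet holds the latest revision`, matching what we see in the cluster.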