
I am deploying a new Deployment after making changes to a Kubernetes service, but I am facing a strange issue. When I delete the Deployment, it is deleted fine, but its ReplicaSets and Pods are not deleted. Therefore, after applying the Deployment again, new ReplicaSets and Pods are created, but the newly created Pods throw a "FailedScheduling" error with the message "0/1 nodes are available: 1 Too many pods.", and that's why the new changes are not reflected.

These are the commands I am using:

kubectl delete -f render.yaml
kubectl apply -f render.yaml
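
After the delete, the leftovers can be inspected by label (a sketch; the `app=renderdev-deployment` label comes from the manifest below, and `<rs-name>` is a placeholder for whatever ReplicaSet name the first command prints):

```shell
# Check whether the ReplicaSets and Pods from the old Deployment are really gone.
# The label selector matches the labels defined in render.yaml.
kubectl get replicasets,pods -l app=renderdev-deployment

# If anything is left over, inspect it for finalizers that could block deletion.
kubectl get replicaset <rs-name> -o jsonpath='{.metadata.finalizers}'
```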

My YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: renderdev-deployment
  labels:
    app: renderdev-deployment
spec:
  replicas: 6
  selector:
    matchLabels:
      app: renderdev-deployment
  template:
    metadata:
      labels:
        app: renderdev-deployment
    spec:
      containers:
      - name: renderdev-deployment
        image: renderdev.azurecr.io/renderdev:latest
        ports:
        - containerPort: 5000
        
        volumeMounts:
        - name: azuresquarevfiles
          mountPath: /mnt/azuresquarevfiles
      volumes:
      - name: azuresquarevfiles
        azureFile:
          secretName: azure-secret
          shareName: videos
          readOnly: false    

So when I delete the Deployment, it should delete its ReplicaSets and Pods as well, but it does not. What could be the issue? Do I have to delete the ReplicaSets and Pods manually?
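
One sketch of a workaround, assuming a reasonably recent kubectl (the `--cascade=foreground` flag replaced the older `--cascade=true` form): a foreground cascading delete waits until the dependent ReplicaSets and Pods are gone before removing the Deployment itself, and any orphans can be cleaned up by label.

```shell
# Foreground cascading delete: blocks until the dependents
# (ReplicaSets, Pods) have been garbage-collected.
kubectl delete -f render.yaml --cascade=foreground

# Fallback: remove any orphaned objects by the label from the manifest.
kubectl delete replicasets,pods -l app=renderdev-deployment
```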

Hunzla Ali
  • why don't you use kubectl delete -f sherparender.yaml to delete deployments. – Vish Sep 13 '21 at 08:23
  • sorry for that. I have updated question. – Hunzla Ali Sep 13 '21 at 09:08
  • @HunzlaSheikh When you run `kubectl delete -f render.yaml` and you see that the `replicaset` is not deleted - try `kubectl get replicaset xxxxxxx -o json` and the same for at least 1 pod in this `replicaset`. Check if there are any `finalizers` which block deletion. – moonkotte Sep 13 '21 at 15:03

0 Answers