I am deploying a new Deployment after making changes to a Kubernetes service, but I am facing a strange issue. When I delete the Deployment, the Deployment itself is deleted fine, but its ReplicaSets and Pods are not. Therefore, after applying the Deployment again, new ReplicaSets and Pods are created, but the newly created Pods throw a "FailedScheduling" error with the message "0/1 nodes are available: 1 Too many pods." As a result, my new changes are not reflected.
These are the commands I am using:
kubectl delete -f render.yaml
kubectl apply -f render.yaml
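One way to make sure the old ReplicaSets and Pods are actually gone before re-applying is to delete with foreground cascading and then wait on the Pod label. This is a sketch against my manifest; the label `app=renderdev-deployment` is taken from it, and `--cascade=foreground` assumes a reasonably recent kubectl (older versions use `--cascade=true`):

```shell
# Block until the Deployment's ReplicaSets and Pods are deleted too,
# not just the Deployment object itself
kubectl delete -f render.yaml --cascade=foreground

# Wait until no Pods carrying the Deployment's label remain
kubectl wait --for=delete pod -l app=renderdev-deployment --timeout=120s

# Only then re-apply the updated manifest
kubectl apply -f render.yaml
```

With foreground cascading the delete command itself does not return until the children are removed, so the subsequent apply cannot race against leftover Pods.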
My YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: renderdev-deployment
  labels:
    app: renderdev-deployment
spec:
  replicas: 6
  selector:
    matchLabels:
      app: renderdev-deployment
  template:
    metadata:
      labels:
        app: renderdev-deployment
    spec:
      containers:
        - name: renderdev-deployment
          image: renderdev.azurecr.io/renderdev:latest
          ports:
            - containerPort: 5000
          volumeMounts:
            - name: azuresquarevfiles
              mountPath: /mnt/azuresquarevfiles
      volumes:
        - name: azuresquarevfiles
          azureFile:
            secretName: azure-secret
            shareName: videos
            readOnly: false
So when I delete the Deployment, it should delete its ReplicaSets and Pods as well, but it does not. What could be the issue? Do I have to delete those ReplicaSets and Pods manually?
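For reference, these are the commands I use to inspect what is left behind and why the scheduler reports "Too many pods" (a sketch; `<node-name>` is a placeholder for the actual node, and the "Too many pods" message generally means the node's max-pods allocatable limit has been reached):

```shell
# List any ReplicaSets and Pods still carrying the old Deployment's label
kubectl get rs,pods -l app=renderdev-deployment

# Show the node's Pod capacity/allocatable versus what is scheduled;
# "Too many pods" means the allocatable "pods" count is exhausted
kubectl describe node <node-name>

# Count Pods across all namespaces to compare against that limit
kubectl get pods --all-namespaces --no-headers | wc -l
```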