The reason is that the container was started with the old flags. When you pass new flags, the kube-controller-manager pod is restarted, but a pod restart does not necessarily mean a container restart, so the kube-controller-manager container is still running with the old flags.
You can check this with the following command:
docker ps --no-trunc | grep "kube-controller-manager --"
dcc828aa22aae3c6bb3c4ba31d0cfcac669b9c47e4cf50af580ebbb334bfea9f sha256:40c8d10b2d11cbc3db2e373a5ffce60dd22dbbf6236567f28ac6abb7efbfc8a9 "kube-controller-manager --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --pod-eviction-timeout=30s --leader-elect=true --use-service-account-credentials=true --controllers=*,bootstrapsigner,tokencleaner --root-ca-file=/etc/kubernetes/pki/ca.crt --address=127.0.0.1 --kubeconfig=/etc/kubernetes/controller-manager.conf --service-account-private-key-file=/etc/kubernetes/pki/sa.key --allocate-node-cidrs=true --cluster-cidr=192.168.13.0/24 --node-cidr-mask-size=24"
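These flags come from the command section of the static pod manifest. If you want to compare what the manifest currently defines against what the running container was started with, you can inspect it with something like this (the exact flags and values will differ on your cluster):
grep -A 20 'command:' /etc/kubernetes/manifests/kube-controller-manager.yaml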
Once you update the flags in the /etc/kubernetes/manifests/kube-controller-manager.yaml
file, restart the kube-controller-manager Docker container for the changes to take effect. You can use the following command to restart it:
docker restart $(docker ps --no-trunc | grep "kube-controller-manager --" | awk '{print $1}')
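After the restart, you can re-run the earlier check to confirm the container picked up the new flags. For example, if you changed --pod-eviction-timeout, this should print the new value:
docker ps --no-trunc | grep "kube-controller-manager --" | grep -o -- '--pod-eviction-timeout=[^ ]*'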
Hope this helps.