
I am trying to add some extra flags to my Kubernetes controller manager, so I am updating the flags in the /etc/kubernetes/manifests/kube-controller-manager.yaml file, but the changes are not taking effect. The kubelet detects the changes to the file and restarts the pod, but once restarted it comes back with the old flags.
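
For reference, the change is just an extra flag appended to the controller-manager command in the manifest, along these lines (the flag and value below are only an example; my actual edits are similar):

# Excerpt of /etc/kubernetes/manifests/kube-controller-manager.yaml
spec:
  containers:
  - command:
    - kube-controller-manager
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-monitor-grace-period=40s   # <- newly added flag (example value)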

Any ideas?

devops84uk

2 Answers


It turns out that the kubelet loads every file under /etc/kubernetes/manifests, regardless of extension. When I was adding the new flags, I kept a backup of the existing file with a .bak extension in the same directory, and the kubelet was still loading the .bak file instead of the updated .yaml file. That seems like a bug to me. Anyway, happy to have spotted the error.
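
In case anyone else hits this: the safe approach is to keep the backup outside the watched directory entirely, since the kubelet treats every file in /etc/kubernetes/manifests as a static pod manifest. For example (the backup location is just a suggestion):

# Keep backups outside /etc/kubernetes/manifests so the kubelet ignores them
mkdir -p /root/manifest-backups
cp /etc/kubernetes/manifests/kube-controller-manager.yaml /root/manifest-backups/kube-controller-manager.yaml.bak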

devops84uk

The reason is that the container keeps running with the old flags. When you pass new flags, the kube-controller-manager pod is restarted, but a pod restart does not imply a container restart, so the kube-controller-manager container is still using the old flags.

You can check this with the following command:

docker ps --no-trunc | grep "kube-controller-manager --"
dcc828aa22aae3c6bb3c4ba31d0cfcac669b9c47e4cf50af580ebbb334bfea9f   sha256:40c8d10b2d11cbc3db2e373a5ffce60dd22dbbf6236567f28ac6abb7efbfc8a9                                          "kube-controller-manager --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --pod-eviction-timeout=30s --leader-elect=true --use-service-account-credentials=true --controllers=*,bootstrapsigner,tokencleaner --root-ca-file=/etc/kubernetes/pki/ca.crt --address=127.0.0.1 --kubeconfig=/etc/kubernetes/controller-manager.conf --service-account-private-key-file=/etc/kubernetes/pki/sa.key --allocate-node-cidrs=true --cluster-cidr=192.168.13.0/24 --node-cidr-mask-size=24"                                                  

Once you update the flags in the /etc/kubernetes/manifests/kube-controller-manager.yaml file, restart the Docker container of kube-controller-manager and the changes will take effect. You can use the following command to restart the kube-controller-manager container:

docker restart $(docker ps --no-trunc | grep "kube-controller-manager --" | awk '{print $1}')
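
After the restart, you can re-run the same check as above to confirm the container command now includes the new flags:

docker ps --no-trunc | grep "kube-controller-manager --"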

Hope this helps.

Prafull Ladha
  • That doesn't work; it has no effect on the flags. What I have observed is that the kubelet seems to be off the grid, loading the config from memory or something. It detects a change to the yaml file but doesn't load the content, and continues deploying the controller-manager with the same flags. – devops84uk Nov 26 '18 at 13:56
  • Could you specify which flags you're trying to change? – Prafull Ladha Nov 26 '18 at 14:05
  • node-monitor-grace-period and the like. I've removed existing flags and, for testing, also changed the version of the image, but the pod still comes back with the same old values. Strange! – devops84uk Nov 26 '18 at 14:09