
I am trying to add the two flags below to the apiserver in the /etc/kubernetes/manifests/kube-apiserver.yaml file:

spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,PodNodeSelector
    - --admission-control-config-file=/vagrant/admission-control.yaml

[...]

I am not mounting a volume or mount point for the /vagrant/admission-control.yaml file. It is fully accessible from the master node, since the directory is shared with the VM created by Vagrant:

vagrant@master-1:~$ cat /vagrant/admission-control.yaml 
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodNodeSelector
  path: /vagrant/podnodeselector.yaml
vagrant@master-1:~$
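
For context, the file referenced by "path" uses the configuration format documented for the PodNodeSelector admission plugin; the selectors below are illustrative placeholders, not my real values:

# Placeholder PodNodeSelector config for illustration only
podNodeSelectorPluginConfig:
  clusterDefaultNodeSelector: "env=development"
  namespace1: "env=production"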

Kubernetes version:

vagrant@master-1:~$ kubectl version

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}

Link to the /etc/kubernetes/manifests/kube-apiserver.yaml file used by the running cluster: here.

vagrant@master-1:~$ kubectl delete pods kube-apiserver-master-1 -n kube-system

pod "kube-apiserver-master-1" deleted

Unfortunately, "kubectl describe pods kube-apiserver-master-1 -n kube-system" only shows that the pod has been recreated. The flags do not appear as desired, and no errors are reported.

Any suggestion would be helpful.

Thank you.

NOTES:

  1. I also tried patching the apiserver's ConfigMap (see the sketch after this list). The patch is applied, but it does not take effect in the newly running pod.
  2. I also tried passing the two flags in a file via kubeadm init --config, but there is little documentation on how to put these two flags, and all the other apiserver flags I need, into a configuration file in order to reinstall the master node.
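
Regarding note 1: assuming the ConfigMap in question is kubeadm-config in kube-system (which stores kubeadm's ClusterConfiguration), the change looked roughly like the following. Note that this ConfigMap is only read by kubeadm commands (init/upgrade), so editing it does not re-render the static pod manifest by itself:

kubectl -n kube-system edit configmap kubeadm-config
# then, inside the embedded ClusterConfiguration, add:
#
#   apiServer:
#     extraArgs:
#       enable-admission-plugins: NodeRestriction,PodNodeSelector
#       admission-control-config-file: /vagrant/admission-control.yaml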

UPDATE:

I hope this is useful for everyone facing the same issue...

After two days of searching the internet, and many, many tests, I only managed to make it work with the procedure below:

sudo tee ${KUBEADM_INIT_CONFIG_FILE} <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "${INTERNAL_IP}"
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: ${KUBERNETES_VERSION}
controlPlaneEndpoint: "${LOADBALANCER_ADDRESS}:6443"
networking:
  podSubnet: "10.244.0.0/16"
apiServer:
  extraArgs:
    advertise-address: ${INTERNAL_IP}
    enable-admission-plugins: NodeRestriction,PodNodeSelector
    admission-control-config-file: ${ADMISSION_CONTROL_CONFIG_FILE}
  extraVolumes:
    - name: admission-file
      hostPath: ${ADMISSION_CONTROL_CONFIG_FILE}
      mountPath: ${ADMISSION_CONTROL_CONFIG_FILE}
      readOnly: true
    - name: podnodeselector-file
      hostPath: ${PODNODESELECTOR_CONFIG_FILE}
      mountPath: ${PODNODESELECTOR_CONFIG_FILE}
      readOnly: true
EOF
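
The variables above are placeholders set earlier in my provisioning script; the values below are illustrative only (adjust to your environment):

# Illustrative values only; substitute your own paths and addresses
KUBEADM_INIT_CONFIG_FILE=/tmp/kubeadm-init-config.yaml
KUBERNETES_VERSION=v1.21.0
INTERNAL_IP=192.168.5.11
LOADBALANCER_ADDRESS=192.168.5.30
ADMISSION_CONTROL_CONFIG_FILE=/vagrant/admission-control.yaml
PODNODESELECTOR_CONFIG_FILE=/vagrant/podnodeselector.yaml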


sudo kubeadm init phase control-plane apiserver --config=${KUBEADM_INIT_CONFIG_FILE}
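
To confirm the flags actually landed, one quick sanity check is to grep the regenerated static pod manifest (the pattern below is just illustrative):

# Both flags should now appear in the manifest's command list
grep -E "enable-admission-plugins|admission-control-config-file" /etc/kubernetes/manifests/kube-apiserver.yaml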

1 Answer

You need to create a hostPath volume and volume mount like the ones below:

volumeMounts:        # under spec.containers[0] in kube-apiserver.yaml
- mountPath: /vagrant
  name: admission
  readOnly: true
...
volumes:             # under spec, at the same level as containers
- hostPath:
    path: /vagrant
    type: DirectoryOrCreate
  name: admission
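
Since kube-apiserver runs as a static pod, the kubelet should pick up the edited /etc/kubernetes/manifests/kube-apiserver.yaml and recreate the pod on its own; deleting the mirror pod with kubectl is not required. A quick check (pod name taken from your output; the grep is only illustrative):

# The kubelet recreates the static pod after the manifest changes;
# the mirror pod should then show the new mount.
kubectl -n kube-system get pod kube-apiserver-master-1 -o yaml | grep -A3 volumeMounts
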
Arghya Sadhu
  • Hi, thank you for your suggestion. Changes to the /etc/kubernetes/manifests/kube-apiserver.yaml file do not take effect. I delete the kube-apiserver pod, but the 'kubectl get -n kube-system pod kube-apiserver-master-1 -o yaml' command does not show any change. – Francisco Apr 30 '21 at 17:15
  • I found a workaround. I will update the description. – Francisco Apr 30 '21 at 17:18