I have a project that creates a mutating webhook in the kube-system namespace, and the webhook configuration needs to exclude the namespaces where the webhook server itself is deployed (so the webhook cannot intercept its own pods).

But the kube-system namespace already exists (Kubernetes creates it), so my chart did not create it. How do I attach the required labels to it using Helm?
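
For context, this kind of exclusion is usually implemented with a namespaceSelector on the webhook configuration. A minimal sketch, where the webhook name, service, and label key are all illustrative rather than taken from the question:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-webhook                    # illustrative name
webhooks:
  - name: mutate.example.io                # illustrative name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: webhook-server               # illustrative service
        namespace: kube-system
        path: /mutate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    # Skip any namespace that carries the exclusion label, e.g. kube-system
    namespaceSelector:
      matchExpressions:
        - key: webhook.example.io/exclude  # illustrative label key
          operator: DoesNotExist

So the question is how to get that label onto kube-system when Helm did not create the namespace.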

moluzhui

3 Answers


Helmfile offers hooks, which are pretty neat for this:

releases:
- name: istio-ingress
  namespace: istio-ingress
  chart: istio/gateway
  wait: true
  hooks:
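    # Both presync hooks run before the release is installed or upgraded:
    # first create the namespace idempotently, then label it
    # (istio-injection=enabled turns on Istio sidecar injection)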
    - events:
        - presync
      showlogs: true
      command: sh
      args:
        - -c
        - "kubectl create namespace istio-ingress --dry-run=client -o yaml | kubectl apply -f -"
    - events:
        - presync
      showlogs: true
      command: sh
      args:
        - -c
        - "kubectl label --dry-run=client -o yaml --overwrite namespace istio-ingress istio-injection=enabled | kubectl apply -f -"
zemicolon

Since Helm doesn't support managing namespaces directly (see: Helm 3 doesn't create namespace #5753), the "correct" way to do this is with a chart hook:

  1. Create a service account for the namespace-labeling job:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "chart.serviceAccountName" . }}
  labels:
    {{- include "chart.labels" . | nindent 4 }}
  annotations:
    {{- toYaml .Values.serviceAccount.annotations | nindent 4 }}
  2. Create a role with the appropriate permissions:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: label-ns
rules:
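# Namespaces are cluster-scoped, but a Role in the release namespace can
# still grant get/patch on that one namespace object via resourceNames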
- apiGroups: [""]
  resources: ["namespaces"]
  resourceNames: [{{ .Release.Namespace }}]
  verbs: ["get", "patch"]
  3. Bind the service account to the role:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ .Release.Namespace }}:label-ns
subjects:
  - kind: ServiceAccount
    name: {{ include "chart.serviceAccountName" . }}
    namespace: {{ .Release.Namespace }}
roleRef:
  kind: Role
  name: label-ns
  apiGroup: rbac.authorization.k8s.io
  4. Create your namespace-labeling job, making sure to use the appropriate serviceAccountName:
apiVersion: batch/v1
kind: Job
metadata:
  name: label-ns
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
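    # Run the Job once, after the chart's resources are installed,
    # and clean it up automatically when it succeeds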
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      serviceAccountName: {{ include "chart.serviceAccountName" . }}
      containers:
      - name: label-ns
        image: "bitnami/kubectl:latest"
        command:
          - kubectl
          - label
          - ns
          - {{ .Release.Namespace }}
          - foo=bar
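
As written, the ServiceAccount, Role, and RoleBinding are regular chart resources, so they already exist by the time the post-install Job fires. If you would rather have them live only as long as the hook, you could annotate them as hooks too, with weights so they are created before the Job (a sketch, not part of the original answer; hooks default to weight "0"):

annotations:
  "helm.sh/hook": post-install
  "helm.sh/hook-weight": "-5"    # created before the Job, which keeps the default "0"
  "helm.sh/hook-delete-policy": hook-succeeded

Either way, after helm install you can verify that the label landed:

kubectl get namespace <release-namespace> --show-labels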
Mike Conigliaro
  • A Kubernetes batch Job isn't Helm either, and it will consume additional resources each time it runs... :D – zemicolon May 18 '23 at 15:09
  • @zemicolon false. [Chart hooks](https://helm.sh/docs/topics/charts_hooks/) are a built-in feature of Helm, and the jobs only consume additional resources when you tell them to run (via the `helm.sh/hook` annotation). – Mike Conigliaro May 19 '23 at 17:08

Since the kube-system namespace is a core part of Kubernetes (every cluster has it preinstalled and some core components run there), Helm can't manage it.

Some possible things you could do instead:

  • Make the per-namespace labels opt-in, not opt-out; only apply the webhook in namespaces where the label is present, rather than in every namespace except flagged ones. (Istio's sidecar injector works this way; see the sketch after this list.)
  • Exclude kube-system as a special case in the code.
  • Manually run kubectl label namespace outside of Helm (a one-liner; shown after this list).
  • Make your larger-scale deployment pipeline run the kubectl command (for example, if you have a Jenkins build that installs the webhook, also make it set the label).
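
For the first option, only the selector changes relative to the sketch under the question: select namespaces that opted in, instead of excluding flagged ones (the label key and value are illustrative):

namespaceSelector:
  matchLabels:
    webhook.example.io/enabled: "true"

kube-system is then skipped simply because nothing ever labels it. The third option is a one-liner, using whatever exclusion label your webhook actually checks:

kubectl label namespace kube-system webhook.example.io/exclude=true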
David Maze
  • This [pre-hook](https://gist.github.com/kvudata/12ba57ae1e7f01799aaa7f36350a9b2e) seems to achieve it, but it requires a specific `ServiceAccount` to operate on the namespace, and Helm 3 does not have Tiller – moluzhui Nov 15 '21 at 02:10
  • That link is a neat trick (running an in-cluster Job to run an imperative `kubectl` command). You could create a ServiceAccount, Role, and RoleBinding all also annotated as Helm hooks, if you wanted to go with that approach. – David Maze Nov 15 '21 at 11:22
  • I used the method you described; I expected it to add labels when the release is created and remove them when it is deleted, but there seem to be some problems with that. I have provided details in another [question](https://stackoverflow.com/questions/69987836/why-does-helm3-install-trigger-pre-delete-and-not-in-helm2) and look forward to your help. – moluzhui Nov 16 '21 at 11:07