I have a project that creates a mutating webhook in the kube-system namespace, and the webhook needs to exclude the namespaces where the webhook server itself is deployed. But the kube-system namespace already exists. How do I attach the required labels to it using Helm?
Helmfile offers hooks, which are pretty neat for this:
releases:
  - name: istio-ingress
    namespace: istio-ingress
    chart: istio/gateway
    wait: true
    hooks:
      - events:
          - presync
        showlogs: true
        command: sh
        args:
          - -c
          - "kubectl create namespace istio-ingress --dry-run=client -o yaml | kubectl apply -f -"
      - events:
          - presync
        showlogs: true
        command: sh
        args:
          - -c
          - "kubectl label --dry-run=client -o yaml --overwrite namespace istio-ingress istio-injection=enabled | kubectl apply -f -"
Since Helm doesn't support managing namespaces directly (see: Helm 3 doesn't create namespace #5753), the "correct" way to do this is with a chart hook. You need a ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "chart.serviceAccountName" . }}
  labels:
    {{- include "chart.labels" . | nindent 4 }}
  annotations:
    {{- toYaml .Values.serviceAccount.annotations | nindent 4 }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: label-ns
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    resourceNames: [{{ .Release.Namespace }}]
    verbs: ["get", "patch"]
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ .Release.Namespace }}:label-ns
subjects:
  - kind: ServiceAccount
    name: {{ include "chart.serviceAccountName" . }}
    namespace: {{ .Release.Namespace }}
roleRef:
  kind: Role
  name: label-ns
  apiGroup: rbac.authorization.k8s.io
And a Job that runs kubectl under that serviceAccountName to apply the label:
apiVersion: batch/v1
kind: Job
metadata:
  name: label-ns
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      serviceAccountName: {{ include "chart.serviceAccountName" . }}
      containers:
        - name: label-ns
          image: "bitnami/kubectl:latest"
          command:
            - kubectl
            - label
            - ns
            - {{ .Release.Namespace }}
            - foo=bar
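Once the namespace carries the label, the webhook configuration can key off it. Here is a minimal sketch of the relevant namespaceSelector in a MutatingWebhookConfiguration, assuming the placeholder foo=bar label from the Job above marks namespaces the webhook should skip (the webhook and service names are hypothetical):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-webhook          # hypothetical name
webhooks:
  - name: example.webhook.local  # hypothetical name
    # Skip namespaces carrying the label the hook Job applied (foo=bar here).
    namespaceSelector:
      matchExpressions:
        - key: foo
          operator: NotIn
          values: ["bar"]
    clientConfig:
      service:
        name: webhook-server     # hypothetical service name
        namespace: {{ .Release.Namespace }}
        path: /mutate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
```

With this selector, admission requests from any namespace labeled foo=bar (including the one the hook labeled) never reach the webhook server.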
Since the kube-system namespace is a core part of Kubernetes (every cluster has it preinstalled and some core components run there), Helm can't manage it.
Some possible things you could do instead:

- Treat kube-system as a special case in the code.
- Run kubectl label namespace outside of Helm.
- Wrap the Helm invocation in something that also runs the kubectl command (for example, if you have a Jenkins build that installs the webhook, also make it set the label).
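The middle option is a one-liner. A sketch, assuming the webhook's namespaceSelector matches on the same placeholder foo=bar label used earlier (these commands need a live cluster and kubectl configured against it):

```shell
# Label kube-system outside of Helm; --overwrite makes this safe to re-run.
kubectl label namespace kube-system foo=bar --overwrite

# Confirm the label landed.
kubectl get namespace kube-system --show-labels
```

Because the command is idempotent, it can sit unconditionally in a CI step or install script alongside the helm upgrade --install call.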