
My Helm chart has some 12 pods. When I did a helm upgrade after changing some values, all the pods were restarted except for one.

My question is:

Will helm upgrade restart the pods even if they are not affected by the upgrade?

Putting it another way:

Does helm upgrade restart pods only if they are affected by the upgrade?

Chandu

6 Answers


The flag --recreate-pods was marked as deprecated in Helm 2 and was removed with Helm 3.

Helm suggests either adding a checksum of files which could have changed, like this:

      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}

or adding an annotation with a random string, which forces an update on every rollout:

      annotations:
        rollme: {{ randAlphaNum 5 | quote }}

See Helm docs: https://v3.helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
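
For context, here is a minimal sketch of where such a checksum annotation sits in a Deployment template; the app name `my-app`, the image, and the `configmap.yaml` path are assumptions for illustration, not from the docs:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
          annotations:
            # Any change to configmap.yaml changes this hash, which changes
            # the pod template and therefore triggers a rolling update.
            checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
        spec:
          containers:
            - name: my-app
              image: my-app:1.0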

Datz
  • When I add `rollme: {{ randAlphaNum 5 | quote }}` in all my deployments, `helm upgrade` restarts all deployments. – Yunus Einsteinium Aug 02 '20 at 18:59
  • So, if I understand correctly: if a chart doesn't do that and you just use the chart, you need to manually use kubectl to restart or kill all the pods? – Fl_ori4n Jun 29 '23 at 16:45
  • Using `randAlphaNum 5` there's a small chance that the same string gets generated and therefore the pods don't restart. – Alex Aug 17 '23 at 10:25

As far as I know, helm restarts only the pods which are affected by the upgrade.

If you want to restart ALL pods, you can use the --recreate-pods flag (Helm 2 only; as noted above, it was removed in Helm 3):

--recreate-pods -> performs pods restart for the resource if applicable

For example, if you have the dashboard chart, you can use this command to restart every pod:

helm upgrade --recreate-pods -i k8s-dashboard stable/k8s-dashboard

There is a GitHub issue which provides another workaround for it:

Every time you need to restart the pods, change the value of that annotation. A good annotation value could be a timestamp.

First, add an annotation to the pod template. If your chart uses a Deployment, add the annotation to spec.template.metadata.annotations. For example:

kind: Deployment
spec:
  template:
    metadata:
      labels:
        app: ecf-helm-satellite-qa
      annotations:
        timestamp: "{{ .Values.timestamp }}"

Deploy that. Now, every time you set a new timestamp in the helm command, Kubernetes will roll out an update without downtime:

helm upgrade ecf-helm-satellite-qa . --set-string timestamp=a_random_value
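
For instance, you could pass the current Unix time so the value differs on every run (the `date` substitution is just one way to generate a changing value):

    helm upgrade ecf-helm-satellite-qa . --set-string timestamp="$(date +%s)"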
Jakub

--recreate-pods has been removed in Helm 3, and that certainly got the attention of some Helm users.

I force the pods to be recreated using a timestamp annotation in the deployment's pod spec. Note that it has to be in the pod template spec; this will not work at the deployment top level:

spec:
  template:
    metadata:
      annotations:
        releaseTime: {{ dateInZone "2006-01-02 15:04:05Z" (now) "UTC"| quote }}
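
With that annotation in place, a plain upgrade is enough: every render produces a new `releaseTime`, so the pod template changes and Kubernetes performs a rolling update. For example (the release name and chart path here are placeholders):

    helm upgrade my-release ./my-chart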
Andy Brown

You need to delete the job first, and then run:

helm history <release_name>
helm rollback <release_name> <number> --recreate-pods
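
Note that `--recreate-pods` only exists in Helm 2; it was removed in Helm 3. A sketch of the full sequence, assuming the stuck resource is a Job named `my-job` and the release is called `my-release` (both placeholders):

    # Delete the stuck Job so the rollback can recreate it
    kubectl delete job my-job
    # Find the revision to roll back to, then roll back and recreate pods (Helm 2)
    helm history my-release
    helm rollback my-release 2 --recreate-pods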
张馆长

I have defined a template helper using the recommendations above; adding it here so that I won't forget :)

{{- define "vtd2-consumer.annotations" -}}
{{ .Values.podAnnotations | toYaml }}
releaseTime: {{ dateInZone "2006-01-02 15:04:05Z" (now) "UTC"| quote }}
rollme: {{ randAlphaNum 5 | quote }}
checksum/config: {{ include (print $.Template.BasePath "/config.yaml") . | sha256sum }}
{{- end }}

Decide which method you like (releaseTime, rollme, or checksum/config) and remove the others; you almost certainly don't need all three.
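
The helper can then be pulled into the deployment's pod template, for example like this (the `nindent 8` depth assumes the exact layout shown):

    spec:
      template:
        metadata:
          annotations:
            {{- include "vtd2-consumer.annotations" . | nindent 8 }}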

Danie
  --force        force resource updates through a replacement strategy

https://helm.sh/docs/helm/helm_upgrade/
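
Per the linked docs, `--force` replaces resources instead of patching them, which recreates the pods but can cause downtime, so use it deliberately. Usage would look like this (release name and chart path are placeholders):

    helm upgrade --force my-release ./my-chart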

Jason