
I'm just wondering if there is any decent workaround for long-running Jobs that won't block upgrading / installing a Helm release.

Consider this scenario: we have a Job that, for example, syncs S3 buckets and takes 8 hours to complete. A Job specified in the Helm chart will take those 8 hours when it is created, causing the whole upgrade / install (in my case via FluxCD) to time out. Raising the timeout to 8 hours is not a great idea either - we would have a hanging worker for almost half a day!
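To illustrate, the only knob on the Flux side would be something like the following on the HelmRelease (names here are placeholders, not my real release); the helm-controller then just sits waiting on that one release for the whole window:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-app             # placeholder name
  namespace: default
spec:
  interval: 10m
  # Raising this to cover the S3 sync keeps the reconciliation
  # "in progress" for up to 8 hours on every install.
  timeout: 8h
  chart:
    spec:
      chart: my-app        # placeholder chart name
      sourceRef:
        kind: HelmRepository
        name: my-repo      # placeholder repository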

Straight to the point: what is your workaround for getting past this issue?

I tried some Helm hooks, and also using a plain Pod instead of a Job, but it is still the same issue.
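For reference, the hook variant looked roughly like this (image, tag and bucket names are placeholders, not my real setup); since Helm waits for hook resources to finish, the release is blocked in exactly the same way:

apiVersion: batch/v1
kind: Job
metadata:
  name: sync-s3-buckets
  annotations:
    # Helm waits for hook resources to complete before the release
    # is marked successful, so the long sync still blocks it.
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: sync
        image: amazon/aws-cli:2.13.0   # placeholder image/tag
        command: ["aws", "s3", "sync", "s3://source-bucket", "s3://dest-bucket"]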

Here is what the header of the Job looks like:

{{- if .Values.clone.bucket.sync.enabled -}}
{{/* Only render this Job on the initial install, not on upgrades */}}
{{- if .Release.IsInstall -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: sync-s3-buckets
  namespace: {{ .Release.Namespace }}
spec:
  # Keep the finished Job around for 7 days before garbage collection
  ttlSecondsAfterFinished: 604800
  template:
    spec:
      serviceAccountName: {{ .Values.clone.bucket.serviceAccount.name }}
      initContainers:
      - name: wait-for-s3-sync
        image: "bitnami/kubectl:1.25.6"

• If you create an ordinary Job, not as a Helm hook, then I wouldn't expect it to block the deployment. (I also wouldn't expect it to re-run on `helm upgrade`.) Do you have a more specific example of what the file header of the Job looks like? Can you create the Job some other way? – David Maze Jun 05 '23 at 17:02
• I've added the header to the post. – oskarfightingthecode Jun 05 '23 at 17:19

0 Answers