Just wondering if there is any decent workaround for long-running Jobs that won't block upgrading / installing a Helm release.
Consider this: we have a Job that, for example, syncs S3 buckets and takes 8 hours to complete. A Job specified in the Helm chart will take those 8 hours on creation, causing the whole upgrade / install (in my case via FluxCD) to time out. Raising the timeout to 8 hours is not a good idea either - we would have a hanging worker for almost half a day!
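For reference, the timeout I mean is the one on the Flux `HelmRelease` itself. A minimal sketch (release and chart names are made up) of what raising it to 8 hours would look like - the option I want to avoid:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-app          # hypothetical release name
spec:
  interval: 10m
  # timeout applies to the Helm install/upgrade action; setting it to 8h
  # keeps a helm-controller worker occupied for the entire sync
  timeout: 8h
  chart:
    spec:
      chart: my-app     # hypothetical chart name
      sourceRef:
        kind: HelmRepository
        name: my-repo   # hypothetical repository name
```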
Straight to the point: what is your workaround for this issue?
I tried some Helm hooks, and using a Pod instead of a Job, but the issue is the same.
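For context, this is the kind of hook setup I tried - a sketch, assuming a post-install hook on the Job. Helm still waits for hook resources to run to completion, so it doesn't get around the timeout:

```yaml
metadata:
  name: sync-s3-buckets
  annotations:
    # run the Job after install; Helm blocks until hook resources finish,
    # so an 8-hour Job still stalls the release
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "1"
    "helm.sh/hook-delete-policy": hook-succeeded
```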
Here is what the header of the Job looks like:
```yaml
{{- if .Values.clone.bucket.sync.enabled -}}
{{- if .Release.IsInstall -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: sync-s3-buckets
  namespace: {{ .Release.Namespace }}
spec:
  ttlSecondsAfterFinished: 604800
  template:
    spec:
      serviceAccountName: {{ .Values.clone.bucket.serviceAccount.name }}
      initContainers:
        - name: wait-for-s3-sync
          image: "bitnami/kubectl:1.25.6"
```