I have a CronJob object specified to perform a job. This is the YAML I've used; a new image is published via a CI/CD pipeline.
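For context, the publish step in the pipeline looks roughly like this (the registry path and tag variable are my assumptions, not the actual pipeline values):

```shell
# Hypothetical publish step: build and push the image with a per-commit tag.
IMAGE="gcr.io/my-project/image-name"   # assumed registry path
TAG="${CI_COMMIT_SHA:-dev}"            # assumed unique tag per pipeline run
echo "Pushing ${IMAGE}:${TAG}"
# docker build -t "${IMAGE}:${TAG}" .
# docker push "${IMAGE}:${TAG}"
```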
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: name-XXX-XXX
  labels:
    app: name-XXX-XXX
spec:
  schedule: "*/3 * * * *"
  concurrencyPolicy: Forbid
  startingDeadlineSeconds: 200
  successfulJobsHistoryLimit: 0
  failedJobsHistoryLimit: 0
  suspend: false
  jobTemplate:
    spec:
      template:
        spec:
          nodeSelector:
            cloud.google.com/gke-nodepool: xxx-pool
          terminationGracePeriodSeconds: 60
          containers:
            - name: name-XXX-XXX-container
              imagePullPolicy: Always
              image: image-name
              envFrom:
                - configMapRef:
                    name: name-config
                - configMapRef:
                    name: name-config
                - secretRef:
                    name: name-config-secret
                - secretRef:
                    name: name-config-secret
                - secretRef:
                    name: name-config-secret
                - secretRef:
                    name: common-secret
          restartPolicy: OnFailure
```
Whenever a new image is published, the pod still runs with the old image. I tried setting `restartPolicy: Always`, similar to a Deployment object, but the pipeline failed to deploy with:

```
The CronJob "name-XXX" is invalid: spec.jobTemplate.spec.template.spec.restartPolicy: Required value: valid values: "OnFailure", "Never"
```
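As the error says, a Job's pod template only accepts `OnFailure` or `Never` (`Always` is reserved for long-running workloads such as Deployments), so the closest valid spec is what I already have:

```yaml
jobTemplate:
  spec:
    template:
      spec:
        restartPolicy: OnFailure   # or "Never"; "Always" is rejected for Jobs
```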
I need the CronJob's pods to automatically run with the new image. Any help would be appreciated!
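One idea I'm considering is having the pipeline patch the CronJob's pod template so the next run uses the freshly pushed tag. A sketch, where `NEW_TAG` is hypothetical and the resource/container names come from the manifest above:

```shell
# Sketch: build the command that would update the container image
# in the CronJob's pod template. NEW_TAG is an assumed pipeline variable.
NEW_TAG="${NEW_TAG:-dev}"
CMD="kubectl set image cronjob/name-XXX-XXX name-XXX-XXX-container=image-name:${NEW_TAG}"
echo "$CMD"   # the pipeline would run this against the cluster
```

I'm not sure whether this is the idiomatic way to do it, or whether the manifest itself should be re-applied with the new tag.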