I have set up my Kubernetes CronJob to prevent concurrent runs, using concurrencyPolicy: Forbid and parallelism: 1 as shown in the spec below. However, when I create a Job from the CronJob manually, I am allowed to do that:
$ kubectl get cronjobs
...
$ kubectl create job new-cronjob-1642417446000 --from=cronjob/original-cronjob-name
job.batch/new-cronjob-1642417446000 created
$ kubectl create job new-cronjob-1642417446001 --from=cronjob/original-cronjob-name
job.batch/new-cronjob-1642417446001 created
I was expecting that a new Job would not be created, or that it would be created but fail with a status referencing the concurrencyPolicy. Since the property concurrencyPolicy is part of the CronJob spec, not the PodSpec, I thought it should prevent a new Job from being created. Why doesn't it?
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-name
  annotations:
    argocd.argoproj.io/sync-wave: "1"
spec:
  schedule: "0 * * * *"
  suspend: false
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 3
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      parallelism: 1
      completions: 1
      backoffLimit: 3
      template:
        spec:
          restartPolicy: Never
After reading the official documentation about kubectl create -f, I didn't find a way to prevent that. Is this behavior expected? If it is, I think I should check inside my Docker image (the app is written in Java) whether there is already a Job for this CronJob running. How would I do that? Something like the sketch below is what I have in mind.
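This is only a rough sketch of the guard I'm imagining, using the fabric8 kubernetes-client from inside the Pod. The app: cronjob-name label and the JOB_NAME environment variable are assumptions on my part: the label would need to be added to the jobTemplate metadata (so both scheduled and manually created Jobs carry it), and the Job name would need to be injected into the container, e.g. via an env var.

import io.fabric8.kubernetes.api.model.batch.v1.Job;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import java.util.List;

public class ConcurrencyGuard {
    public static void main(String[] args) {
        // Uses the in-cluster service account credentials when running inside a Pod.
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // "app: cronjob-name" is a hypothetical label added to the jobTemplate metadata
            // so that every Job created from this CronJob (scheduled or manual) carries it.
            List<Job> jobs = client.batch().v1().jobs()
                    .inNamespace("default")
                    .withLabel("app", "cronjob-name")
                    .list()
                    .getItems();

            // JOB_NAME is assumed to be injected into the container, e.g. via an env entry
            // in the jobTemplate, so this run can exclude its own Job from the check.
            String myJobName = System.getenv("JOB_NAME");

            long otherActiveRuns = jobs.stream()
                    .filter(job -> !job.getMetadata().getName().equals(myJobName))
                    .filter(job -> job.getStatus() != null
                            && job.getStatus().getActive() != null
                            && job.getStatus().getActive() > 0)
                    .count();

            if (otherActiveRuns > 0) {
                System.out.println("Another run of this job is still active, exiting.");
                return;
            }

            // ... run the actual work here ...
        }
    }
}

The service account running the Pod would also need RBAC permission to list Jobs in the namespace, which is another thing I'm not sure is acceptable in my setup.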