You have set schedule: "* * * * *", which means a Job will be created every minute.
concurrencyPolicy: "Forbid" works exactly as described in the Kubernetes docs:
The cron job does not allow concurrent runs; if it is time for a new job run and the previous job run hasn't finished yet, the cron job skips the new job run
Meaning, it will not create a new Job while the previous Job is still unfinished. Once the previous Job has finished, concurrencyPolicy allows another one to be created; it will never run two unfinished Jobs at the same time.
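For reference, concurrencyPolicy sits at the top level of the CronJob spec, next to schedule (a fragment, just to show the placement):

spec:
  schedule: "* * * * *"
  concurrencyPolicy: Forbid   # other accepted values are Allow (the default) and Replace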
activeDeadlineSeconds:
As per the Kubernetes docs:
The activeDeadlineSeconds applies to the duration of the job, no matter how many Pods are created. Once a Job reaches activeDeadlineSeconds, all of its running Pods are terminated and the Job status will become type: Failed with reason: DeadlineExceeded.
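Note that activeDeadlineSeconds is a field of the Job spec, so in a CronJob it goes under jobTemplate.spec; the value below is just an example:

spec:
  jobTemplate:
    spec:
      activeDeadlineSeconds: 100   # example value; after 100s all running Pods are killed and the Job is marked Failed/DeadlineExceeded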
Also, as mentioned in the Jobs cleanup policy:
If the Jobs are managed directly by a higher level controller, such as CronJobs, the Jobs can be cleaned up by CronJobs based on the specified capacity-based cleanup policy.
To test this, I used a busybox image with a sleep 20 command, as I don't know exactly what your job is doing.
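The test manifest looked something like this minimal sketch (keeping your CronJob name; the container name is illustrative, and batch/v1beta1 was the CronJob API version at the time of writing):

apiVersion: batch/v1beta1        # batch/v1 on Kubernetes 1.21+
kind: CronJob
metadata:
  name: etl-table-feed-from-schema-vtex-to-schema-sale-all
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 3      # matching your values
  successfulJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: busybox        # illustrative container name
            image: busybox
            command: ["sleep", "20"]
          restartPolicy: Never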
Meaning, if you keep your current settings:
spec:
  failedJobsHistoryLimit: 3
  successfulJobsHistoryLimit: 1
the last successful Job will be kept until the next one is created, so it stays around for a while in case you want to check its logs, etc.
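For example, while a finished Job is still kept, you can read its logs directly via the Job name (here using one of the Job names from the output below):

$ kubectl logs job/etl-table-feed-from-schema-vtex-to-schema-sale-all-1593018780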
$ kubectl get cronjob,job,pod
NAME                                                               SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/etl-table-feed-from-schema-vtex-to-schema-sale-all   * * * * *   False     1        17s             51s

NAME                                                                      COMPLETIONS   DURATION   AGE
job.batch/etl-table-feed-from-schema-vtex-to-schema-sale-all-1593018780   0/1           14s        14s

NAME                                                                  READY   STATUS    RESTARTS   AGE
pod/etl-table-feed-from-schema-vtex-to-schema-sale-all-1593018h9pnh   1/1     Running   0          13s
---
$ kubectl get cronjob,job,pod
NAME                                                               SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/etl-table-feed-from-schema-vtex-to-schema-sale-all   * * * * *   False     1        33s             2m7s

NAME                                                                      COMPLETIONS   DURATION   AGE
job.batch/etl-table-feed-from-schema-vtex-to-schema-sale-all-1593018780   1/1           23s        90s
job.batch/etl-table-feed-from-schema-vtex-to-schema-sale-all-1593018840   1/1           21s        29s

NAME                                                                  READY   STATUS      RESTARTS   AGE
pod/etl-table-feed-from-schema-vtex-to-schema-sale-all-1593018h9pnh   0/1     Completed   0          89s
pod/etl-table-feed-from-schema-vtex-to-schema-sale-all-1593018k7b58   0/1     Completed   0          29s
---
$ kubectl get cronjob,job,pod
NAME                                                               SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/etl-table-feed-from-schema-vtex-to-schema-sale-all   * * * * *   False     0        34s             2m8s

NAME                                                                      COMPLETIONS   DURATION   AGE
job.batch/etl-table-feed-from-schema-vtex-to-schema-sale-all-1593018840   1/1           21s        30s

NAME                                                                  READY   STATUS      RESTARTS   AGE
pod/etl-table-feed-from-schema-vtex-to-schema-sale-all-1593018k7b58   0/1     Completed   0          30s
However, if you set successfulJobsHistoryLimit to 0, the Job will be removed shortly after it finishes, even before the next scheduled Job:
spec:
  failedJobsHistoryLimit: 3
  successfulJobsHistoryLimit: 0
Output:
$ kubectl get cronjob,job,pod
NAME                                                               SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/etl-table-feed-from-schema-vtex-to-schema-sale-all   * * * * *   False     1        18s             31s

NAME                                                                      COMPLETIONS   DURATION   AGE
job.batch/etl-table-feed-from-schema-vtex-to-schema-sale-all-1593018540   0/1           15s        15s

NAME                                                                  READY   STATUS    RESTARTS   AGE
pod/etl-table-feed-from-schema-vtex-to-schema-sale-all-15930182r5bn   1/1     Running   0          15s
---
$ kubectl get cronjob,job,pod
NAME                                                               SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/etl-table-feed-from-schema-vtex-to-schema-sale-all   * * * * *   False     1        31s             44s

NAME                                                                      COMPLETIONS   DURATION   AGE
job.batch/etl-table-feed-from-schema-vtex-to-schema-sale-all-1593018540   1/1           22s        28s

NAME                                                                  READY   STATUS      RESTARTS   AGE
pod/etl-table-feed-from-schema-vtex-to-schema-sale-all-15930182r5bn   0/1     Completed   0          28s
---
$ kubectl get cronjob,job,pod
NAME                                                               SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/etl-table-feed-from-schema-vtex-to-schema-sale-all   * * * * *   False     0        34s             47s
How long this takes also depends on the Job's duration.
Also, if the Job completed successfully (exit code 0), the Pod changes its status to Completed and no longer uses CPU/memory resources.
You can also read about the TTL Mechanism for finished resources, but unfortunately I don't think it would work here, as the master is managed by Google (GKE) and this feature would require enabling the TTLAfterFinished feature gate on the control plane components.
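For completeness, that TTL mechanism would be configured with ttlSecondsAfterFinished on the Job spec (again under jobTemplate.spec in a CronJob); the value below is just an example:

spec:
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 60   # example value; deletes the finished Job ~60s after completion

It only takes effect on clusters where the TTLAfterFinished feature gate is enabled.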