I'd like to know how a Kubernetes CronJob chooses which job to run when there are multiple waiting jobs. It is not FIFO, so is it LIFO?
Here are the settings of my experiment:
- Kubernetes server version 1.21.5
- 1 node in the Kubernetes cluster
- a limit of 3 pods per node, set via a `ResourceQuota` on the namespace
I scheduled 9 CronJobs (`cronjob1` .. `cronjob9`) with different names.
Each job is like the following:
- it takes 130 sec (just `sleep`)

```yaml
schedule: "*/2 * * * *"
concurrencyPolicy: Forbid
startingDeadlineSeconds: 3000
successfulJobsHistoryLimit: 0
failedJobsHistoryLimit: 1
```
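For context, a minimal sketch of one of the nine CronJob manifests, combining the settings above (the container image, name, and `sleep` command are my assumptions, not the exact manifest used):

```yaml
# Hypothetical reconstruction of one CronJob used in the experiment.
# The image and container command are illustrative assumptions;
# the schedule and policy fields are the ones listed above.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob1
  namespace: cron-job-ns
spec:
  schedule: "*/2 * * * *"
  concurrencyPolicy: Forbid
  startingDeadlineSeconds: 3000
  successfulJobsHistoryLimit: 0
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: sleeper
              image: busybox
              command: ["sleep", "130"]  # each run takes ~130 sec
```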
Here is the result:
- First, 3 CronJobs, say `job1`, `job2`, `job3`, become running. Which 3 seems random.
- Since each job takes 130 sec to finish, the next scheduled time comes around.
- After `job1`, `job2`, `job3` finish, the same jobs `job1`, `job2`, `job3` are started again. `job4` .. `job9` are never executed.
Update
- My cluster has only a single node.
- Kubernetes on Docker Desktop for Mac
- Here are the files used to limit resources.
namespace.yaml

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cron-job-ns
```
resource_quota.yaml

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: limit-number-of-pods
  namespace: cron-job-ns
spec:
  hard:
    count/pods: "3"
```