I have a CronJob which is defined to forbid concurrency (`concurrencyPolicy: Forbid`). Now the Pod launched by the Job (which the CronJob creates) itself spawns a Pod, which is out of my control and which in turn may or may not spawn yet another Pod.
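For reference, the relevant part of the CronJob looks roughly like this (a minimal sketch; name, schedule and image are placeholders rather than my real values):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob              # placeholder
spec:
  schedule: "*/10 * * * *"      # placeholder
  concurrencyPolicy: Forbid     # no two Jobs of this CronJob should overlap
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: first-pod
              image: my-image:latest   # placeholder; this is the Pod that spawns the second Pod
```

Over two scheduled runs the timeline looks like this: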
```
··········> ··········> time ··········> ··········>
CronJob
   |
   +-----------------------+----····
   |                       |
schedules               schedules
   |                       |
   v                       v
first Pod ~~~> Exit.    first Pod ~~~> Exit.
   |                       |
spawns                  spawns
   |                       |
   |                    SECOND COMING OF SECOND POD CONCURRENT WITH FIRST
   v
second Pod ~~~~~~ still running ~~~~~~>
```
Now the first Pod exits fairly quickly, leaving the CronJob controller under the impression that the Job is done. But the work is actually still running, because the first Pod spawned another Pod. So the next time the CronJob schedules the Job, it may launch a Pod that runs concurrently with the Pods spawned by the first scheduled Job, which is exactly what I am trying to prevent.
Is there a way for my first Pod (I have full control over the first Pod, apart from the fact that it spawns a Pod and I do not control what goes on in that Pod or how it is spawned, but I can set labels and annotations) to somehow attach the subsequently spawned Pods to the current Job, so that the Job is only deemed finished once these Pods have terminated as well?
My current approach is to check for these spawned Pods myself, but it's quite tedious and I'd prefer an existing Kubernetes solution. It would also simplify my infrastructure, as the garbage collector that cleans up the Job's remains would then also clean up these Pods.
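Concretely, the "checking myself" part is a wait loop at the end of the first Pod's command. A rough sketch of what I do, as a fragment of the pod template above (the `spawned-by` label, the spawn step and the polling interval are placeholders; I put the label only on the spawned Pods, kubectl is available in the image, and the Pod's service account is allowed to list Pods):

```yaml
# fragment of jobTemplate.spec.template.spec from the CronJob above
containers:
  - name: first-pod
    image: my-image:latest                # placeholder
    command: ["/bin/sh", "-c"]
    args:
      - |
        /work/spawn-second-pod            # hypothetical: the step that spawns the second Pod
        # Keep this Pod (and therefore the Job) alive until every Pod carrying
        # the label I set has either succeeded or failed.
        while kubectl get pods -l spawned-by=my-cronjob \
              --field-selector=status.phase!=Succeeded,status.phase!=Failed \
              -o name | grep -q .; do
          sleep 10
        done
```

This keeps `concurrencyPolicy: Forbid` effective, but the spawned Pods are still not owned by the Job, so the Job's garbage collection does not clean them up.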