
I have a CronJob which is defined to forbid concurrency (concurrencyPolicy: Forbid). Now the Pod launched by the Job (which the CronJob creates) itself spawns a Pod (which is out of my control), which in turn may or may not spawn another Pod.
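For reference, a minimal sketch of the kind of CronJob in question (all names and the schedule are placeholders); the relevant part is `concurrencyPolicy: Forbid`, which only prevents overlapping Jobs, not overlapping spawned Pods:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob            # placeholder name
spec:
  schedule: "*/5 * * * *"     # placeholder schedule
  concurrencyPolicy: Forbid   # no overlapping Jobs
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: first-pod
              image: my-image # this container spawns the second Pod
```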

··········> ··········> time ··········> ··········>

CronJob
    |
    +-----------------------+----····
    |                       |
  schedules               schedules
    |                       |
    v                       v
first Pod ~~~> Exit.    first Pod ~~~> Exit.
    |                       |
  spawns                  spawns
    |                       |
    v                       v
second Pod ~~ still running ~~~~~~~~~>  second Pod runs concurrently
                                        with the first second Pod!

Now the first Pod exits fairly quickly, leaving the CronJob controller under the impression that the Job is done. But the Job is actually still running, since it spawned a Pod. So the next time the CronJob schedules the Job, it may spawn a Pod that runs concurrently with the Pods spawned by the first scheduled Job, which is exactly what I am trying to prevent.

Is there a way for my first Pod (I have full control over the first Pod, except for the fact that it spawns a Pod whose contents and creation I do not control, although I can set labels and annotations) to somehow add the subsequently spawned Pods to the current Job, so that the Job is only deemed finished once those Pods have terminated?


My current approach is to watch for these spawned Pods myself, but it's quite tedious and I'd prefer it if there was already a solution in Kubernetes. It would also simplify my infrastructure, as the garbage collector which cleans up the Job's remains would then also clean up these Pods.

scravy

1 Answer


I'm afraid Kubernetes won't be able to handle this for you. I would say it's rather a matter of application logic. A Kubernetes Pod is not aware in any way that it spawned a different Pod, or that it was itself spawned by another Pod.

The second Pod is spawned by your app running in the first Pod, and this app is responsible for handling that process. It may watch such Pods and exit only when all of these sub-Pods have terminated. You can compare it with process management on the OS, where the parent process typically doesn't exit before its child processes have terminated.
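As a rough sketch of that watch-and-exit logic (plain Python; `list_children` is a stand-in for a real API call that lists still-running child Pods by some label your spawner sets — it is an assumption, not a Kubernetes API):

```python
import time

def wait_for_children(list_children, poll_interval=1.0, timeout=3600.0):
    """Block until list_children() reports no remaining child Pods.

    list_children is a placeholder for a real lookup, e.g. listing Pods
    by a label the spawner sets; it must return the names of child Pods
    that are still running. Returns True once all children are gone,
    False if the timeout expires first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        remaining = list_children()
        if not remaining:
            return True   # all children terminated; safe for the parent to exit
        time.sleep(poll_interval)
    return False          # timed out with children still running
```

The parent Pod's main process would call this just before exiting, so the Job is only marked complete once the watched sub-Pods are gone.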

You may also think about implementing container lifecycle hooks so that your parent Pods get notified e.g. when a child Pod finishes its job and terminates.

mario
  • As I say in my question, my current approach is to watch for these Pods myself. I am wondering, though, whether I can patch the Job resource to also take into account the Pods I spawned, so that I do not have to do it myself. – scravy Feb 11 '21 at 01:33
  • When you say "Kubernetes Pod is not aware in any way that it spawned a different Pod" do you mean "Kubernetes _Job_ is not aware..."? – scravy Feb 11 '21 at 01:34
  • @WytrzymałyWiktor Does the fact that I did not accept this answer answer your question? – scravy Feb 11 '21 at 01:35