During a Jenkins job build, a Groovy script can create new jobs dynamically. Details below.
We have an architecture with one master and n slave nodes.
We create a Jenkins job (say `some-pipeline-job`), which is configured on the Jenkins master.
On triggering a build of this job (`some-pipeline-job`), the build can run on any slave node.
Consequences:
1) Each build of this job (`some-pipeline-job`) creates a workspace on whichever slave node it runs on
2) This job (`some-pipeline-job`) contains Groovy code that creates a new job dynamically (say `job23`) at runtime, during its own build
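For context, here is a minimal sketch of how such a dynamic job might be created from Groovy running with system privileges (e.g. the Script Console or a system Groovy build step). The job name and the bare-bones `config.xml` are illustrative assumptions, not the actual code in use:

```groovy
import jenkins.model.Jenkins

// A minimal freestyle-job config.xml; a real job would carry more configuration.
def configXml = '''<?xml version="1.0" encoding="UTF-8"?>
<project>
  <builders/>
  <publishers/>
  <buildWrappers/>
</project>'''

def jenkins = Jenkins.getInstance()
// Registers the new job ("job23" here) with the master, the same way
// a job created through the UI is registered.
jenkins.createProjectFromXML(
    'job23',
    new ByteArrayInputStream(configXml.getBytes('UTF-8'))
)
```

Because `createProjectFromXML` goes through the master's item registry, a job created this way shows up in the master's job list like any other job.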
Goal:
Disk management of the workspaces created by any job's builds across the slave nodes, using the second step mentioned in this procedure, based on criteria such as how many days old the builds are.
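As a sketch of the kind of cleanup intended, a system Groovy script could walk every slave node and delete workspaces older than a threshold. The `daysOld` value and the job name are illustrative assumptions, not part of the referenced procedure:

```groovy
import jenkins.model.Jenkins
import hudson.FilePath

int daysOld = 14  // illustrative retention threshold
long cutoff = System.currentTimeMillis() - daysOld * 24L * 60 * 60 * 1000

def jenkins = Jenkins.getInstance()
def job = jenkins.getItemByFullName('some-pipeline-job')

// A job can leave a workspace on every slave it has ever built on,
// so every node must be checked, not just the one that ran the last build.
for (node in jenkins.getNodes()) {
    FilePath ws = node.getWorkspaceFor(job)
    if (ws != null && ws.exists() && ws.lastModified() < cutoff) {
        println "Deleting ${ws.getRemote()} on ${node.getDisplayName()}"
        ws.deleteRecursive()
    }
}
```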
1) Can the second step mentioned in the cloudbees-support article clean the workspaces for all builds of a specific job (`some-pipeline-job`) run across multiple slave nodes?
2) Does the Jenkins master have information about the dynamic job (`job23`) created by `some-pipeline-job` at runtime? How can I ensure that a dynamically created job is tracked (configured) on the master?
3) If yes, can the second step mentioned in the cloudbees-support article also clean the workspace of a `job23` build?