
I'd like to unify the Jenkins job generation process and have one single file that I pass to the Process Job DSLs step.

That way I could vary which files are included in the current release of the DSL. Say I have these files:

..
Release.groovy
FirstPipeline.groovy
SecondPipeline.groovy

And I want Release.groovy to include either both pipelines or maybe just a single pipeline.

Inside the pipeline files there is no class structure, so I cannot import them as if they were a library. Each file just contains something like this:

import mylibs.jobs.UsefulJob1
import mylibs.jobs.UsefulJob2 
import mylibs.jobs.FirstPipeline

def firstPipeline = new FirstPipeline()

multiJob(firstPipeline.name) {
  // multijob steps
}

I tried to use evaluate, but it turned out that it only works for simple scripts. If you use it with a more complex hierarchy of imported libraries and metaprogramming, it fails with hard-to-interpret errors.
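
Roughly, the failing attempt in Release.groovy looked like this (a sketch only; the relative paths are assumptions and depend on where the seed job checks the scripts out):

// Release.groovy - what I tried with evaluate (paths are illustrative)
evaluate(new File('FirstPipeline.groovy'))
evaluate(new File('SecondPipeline.groovy'))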

    For those wondering what the answer is: I ended up refactoring these scripts into classes. Then you can simply import them. – Misha Slyusarev Aug 18 '16 at 12:19

2 Answers


For those who are wondering what the answer is: I ended up refactoring these scripts into classes. In my case FirstPipeline looks something like this:

package pipelines

import mylibs.jobs.UsefulJob1
import mylibs.jobs.UsefulJob2 

import javaposse.jobdsl.dsl.DslFactory

class FirstPipeline {

  public FirstPipeline(DslFactory dslFactory) {
    dslFactory.multiJob('first_pipeline') {
      // multijob steps
    }
  }
}

Then you can gather them all in one file, which is the one you pass to the Process Job DSLs step:

import pipelines.FirstPipeline
import pipelines.SecondPipeline

new FirstPipeline(this)
new SecondPipeline(this)
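
Passing 'this' works because the seed script run by the Job DSL plugin acts as a DslFactory, so the pipeline classes can call multiJob (and any other DSL method) through it.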
Misha Slyusarev

I had a very similar problem recently and came up with a solution (see below). It's maybe not the simplest one; putting the code into real classes might be better, but then you have to compile the classes before you can run the DSL script. My solution is purely script-based.

In the script that gets directly called by the DSL plugin (and which therefore has access to the DSL API and to the workspace where all the scripts are checked out), I define a method 'loadScript'. This method reads in the specified file and evaluates it. But before doing so, it creates a binding and puts some objects into it, among others a reference to the DSL API and a reference to the 'loadScript' method itself. When the loaded script then runs, it can "magically" use the DSL API and the loadScript method.

It's important that the last statement in the loaded script is 'return this'.

Then you can write e.g. the following:

rootScript.groovy:

def firstPipelineCreator = loadScript("firstPipelineCreator")
firstPipelineCreator.createPipeline()
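
The loaded script itself could then look roughly like this (a sketch; the file name and the createPipeline method are illustrative, and jobDsl, loadScript, logInfo and logDebug come from the binding set up below):

firstPipelineCreator.groovy:

// 'jobDsl' is the reference to the DSL API that loadScript puts into the binding
def createPipeline() {
    logInfo "Creating the first pipeline"
    jobDsl.multiJob('first_pipeline') {
        // multijob steps
    }
}

return this // must be the last statement so the caller gets this script object back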

The magic is in the script that is directly referenced in the 'Process DSL' step:

rootDir = "" + SEED_JOB.workspace
jobDsl = this

def loadScript(String scriptName) {
    // Create the binding and put the variables/methods there that will be available
    // in every script that has been loaded via loadScript
    scriptBindings = new Binding(this.binding.variables)
    scriptBindings.setVariable("jobDsl", jobDsl)
    scriptBindings.setVariable("rootDir", rootDir)
    scriptBindings.setVariable("loadScript", this.&loadScript)
    scriptBindings.setVariable("logInfo", this.&logInfo)
    scriptBindings.setVariable("logDebug", this.&logDebug)
    shell = new GroovyShell(scriptBindings)

    logDebug "Loading script '" + scriptName + ".groovy" + "'"

    script = shell.parse(new File(rootDir, scriptName + ".groovy"))
    return script.run() // The script should have 'return this' as the last statement
}

def logInfo(text) {
    println "[INFO ] " + text
}

def logDebug(text) {
    println "[DEBUG] " + text
}
fml2