
I'm using something like this to run tests in parallel:

stage('Test') {
  steps {
    script {
      testing_closures = [one: { print("starting one"); sleep 10; print("finishing one") },
                          two: { print("starting two"); sleep 10; print("finishing two") },
                          three: { print("starting three"); sleep 10; print("finishing three") },
                          four: { print("starting four"); sleep 10; print("finishing four") },
                          five: { print("starting five"); sleep 10; print("finishing five") },
                          six: { print("starting six"); sleep 10; print("finishing six") }]
      parallel(testing_closures)
    }
  }
}

The main goal is to throttle those closures - I don't want all six of them to run concurrently, only 3 at a time. And I want to be able to run another build of this job, which will also run all of those closures, again only 3 simultaneously.

I was thinking about using nodes for this - i.e. wrapping each closure in a node {} block:

one: { node { print("starting one"); sleep 10; print("finishing one") } }

This works OK as long as I use the master node and limit its executors to 4 (1 for the main job, 3 for the concurrent node {} steps).

Unfortunately, I need the master node's executors to be available for other jobs (and for other builds of the job in question), so I cannot limit them.

The only solution I could think of is to use Lockable Resources in the following manner:

  1. Dynamically create 3 lockable resources via LockableResourcesManager::createResourceWithLabel() with a build-unique label.

  2. Lock them by label in each of the closures.

  3. The closures will wait for each other, so only 3 at a time will be running.

  4. ...and now I'm stuck: I could not find any method to delete the resources. I only found an open bug for a quite similar issue. EDIT: I created an improvement request for it.

Even if there were a method to delete the resources, this solution looks dirty and leaves behind resources that may never be cleaned up if something fails.

So - how do I achieve my goal? Is there a way to throttle parallel step?
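
One workaround mentioned in the comments below is to split the map into batches and run the batches one after another. This is only a sketch of that idea, not a true throttle: each batch waits for its slowest closure before the next batch starts. Plain index loops are used because fancier Groovy collection methods can misbehave under the pipeline's CPS transformation:

```groovy
// Workaround sketch: run the closures in sequential batches of 3.
// Not a real throttle -- a batch finishes only when its slowest
// member finishes, leaving executors idle in the meantime.
def keys = testing_closures.keySet().toList()
for (int i = 0; i < keys.size(); i += 3) {
    def batch = [:]
    for (int j = i; j < Math.min(i + 3, keys.size()); j++) {
        batch[keys[j]] = testing_closures[keys[j]]
    }
    parallel(batch)   // runs up to 3 closures concurrently
}
```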

Mirek
  • Any updates? I am still looking for one. – sorin Oct 03 '17 at 14:37
  • @sorin: I created [JENKINS-46236](https://issues.jenkins-ci.org/browse/JENKINS-46236) which turned out to be a duplicate of [JENKINS-44085](https://issues.jenkins-ci.org/browse/JENKINS-44085) – Mirek Oct 19 '17 at 09:41
  • A solution still doesn't exist AFAIK - it's easier to just split the map you pass to parallel into two pieces and execute them one after another. – Onur Gokkocabas Sep 11 '18 at 20:08
  • JFYI: the workaround is in the ticket https://issues.jenkins-ci.org/browse/JENKINS-44085?focusedCommentId=354852&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-354852 – kivagant Nov 29 '18 at 11:46

1 Answer


You can definitely do that with the Lockable Resources Plugin: define 3 resources with a given label, and set the quantity needed by each critical step to 1 (without a quantity, the lock step would require all of the resources carrying that label).

node('slave') {
    def execs = [:]
    // Each branch takes 1 of the 3 resources behind the label;
    // any further branch blocks until one of the locks is released.
    execs[1] = {
        lock(label: 'Win81x64Pool', quantity: 1, variable: 'MY_VAR') {
            println "LOCKED=" + env.MY_VAR
            sleep(3)
        }
    }
    execs[2] = {
        lock(label: 'Win81x64Pool', quantity: 1, variable: 'MY_VAR') {
            println "LOCKED=" + env.MY_VAR
            sleep(3)
        }
    }
    execs[3] = {
        lock(label: 'Win81x64Pool', quantity: 1, variable: 'MY_VAR') {
            println "LOCKED=" + env.MY_VAR
            sleep(3)
        }
    }
    parallel execs
}
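
Applied to the six closures from the question, the same pattern could look roughly like this. The label name `test-throttle` is made up for illustration; it assumes you have configured exactly 3 lockable resources with that label. Because lockable resources are global, this also throttles across concurrent builds of the job, which matches the question's second requirement:

```groovy
// Sketch, assuming a lockable-resources label 'test-throttle'
// (hypothetical name) backed by exactly 3 resources, so at most
// 3 closures hold a lock at any moment -- across all builds.
def names = ['one', 'two', 'three', 'four', 'five', 'six']
def testing_closures = [:]
for (name in names) {
    def n = name  // capture the loop variable for the closure
    testing_closures[n] = {
        lock(label: 'test-throttle', quantity: 1) {
            print("starting ${n}")
            sleep 10
            print("finishing ${n}")
        }
    }
}
parallel(testing_closures)
```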
hakamairi