
What is modern best practice for multi-configuration builds (with Jenkins)?

I want to support multiple branches and multiple configurations.

For example for each version V1, V2 of the software I want builds targeting platforms P1 and P2.

We have managed to set up multi-branch declarative pipelines. Each build runs in its own Docker container, so it's easy to support multiple platforms.

pipeline { 
    agent none 
    stages {
        stage('Build, test and deploy for P1') {
            agent {
                dockerfile {
                   filename 'src/main/docker/Jenkins-P1.Dockerfile'
                }
            }
            steps {
               sh 'buildit...'
            }
        }
        stage('Build, test and deploy for P2') {
            agent {
                dockerfile {
                   filename 'src/main/docker/Jenkins-P2.Dockerfile'
                }
            }
            steps {
               sh 'buildit...'
            }
        }
    }
}

This gives one job covering multiple platforms, but there is no separate red/blue status for each platform. There is a good argument that this does not matter, as you should not release unless the build works on all platforms.

However, I would like a separate status indicator for each configuration. This suggests I should use a multi-configuration build which triggers a parameterised build for each configuration as below (and the linked question):

pipeline { 
    parameters {
      choice(name: 'Platform', choices: ['P1', 'P2'], description: 'Target OS platform')
    }
    agent {
       dockerfile {
          filename someMagicToGetDockerfilePathFromPlatform()
       }
    }
    stages {
        stage("Build, test and deploy for ${params.Platform}") {
            steps {
               sh 'buildit...'
            }
        }
    }
}

There are several problems with this:

  • A declarative pipeline places more constraints on how it is scripted.
  • Multi-configuration builds cannot trigger declarative pipelines (even with the Parameterized Trigger plugin I get "project is not buildable").

This also raises the question: what use are parameters in declarative pipelines?
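For what it's worth, the "magic" in the sketch above need not be a function at all: interpolating the parameter into the Dockerfile path would do (assuming the `Jenkins-<platform>.Dockerfile` naming convention from the first example), though this still leaves open what triggers the job once per platform:

```groovy
pipeline {
    parameters {
        choice(name: 'Platform', choices: ['P1', 'P2'], description: 'Target OS platform')
    }
    agent {
        dockerfile {
            // Assumes the Jenkins-<platform>.Dockerfile naming
            // convention from the first example
            filename "src/main/docker/Jenkins-${params.Platform}.Dockerfile"
        }
    }
    stages {
        stage('Build, test and deploy') {
            steps {
                sh 'buildit...'
            }
        }
    }
}
```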

Is there a strategy that gives the best of both worlds, i.e.:

  • pipeline as code
  • separate status indicators
  • limited repetition?
Bruce Adams
  • What's the deal with P1/P2 choice? Is that a choice? Is that a user input? Are there any common parts for building / docker image creation? – hakamairi May 23 '19 at 08:25
  • I want a build for each supported platform. The naive way of doing it above does it sequentially as part of a single Jenkins job. An older (i.e. pre pipelines) way of doing things would have a parameterised build job which is invoked twice. – Bruce Adams May 23 '19 at 09:32
  • What's the difference between the platforms build wise? – hakamairi May 23 '19 at 12:39
  • I'm not sure it matters for the sake of this question, which is pretty general. I'm assuming they can be created via their own Dockerfiles. There are differences in package management, such as use of apt or yum. But more generally, imagine you were creating a build system for something like gcc. What would your Jenkins configuration need to look like to be maintainable? – Bruce Adams May 23 '19 at 12:58
  • I was just wondering why is there a choice, if there are any parts between the jobs that are common for both etc. – hakamairi May 23 '19 at 19:34
  • Typically we have build systems that try to configure themselves for the environment they run on. So the steps for each platform are mostly the same: cmake to build, ctest to test, for example. Deployment differs, as you may have RPMs for one platform and .deb for another. – Bruce Adams May 24 '19 at 12:43
  • Should the whole build fail if any of the platforms fail? – hakamairi May 25 '19 at 17:04
  • Hi @bruce-adams what happened with your bounty? – hakamairi May 31 '19 at 12:19

2 Answers


This is a partial answer. I think others with better experience will be able to improve on it.

This is currently untested. I may be barking up the wrong tree. Please comment or add a better answer.

So something like the following:

    def build(String platform) {
       def dockerFile
       def indicator
       switch(platform) {
         case 'P1':
            dockerFile = 'foo'
            indicator = 'build for foo'
            break
         case 'P2':
            dockerFile = 'bar'
            indicator = 'build for bar'
            break
       }
       pipeline {
         agent {
            dockerfile {
               filename "$dockerFile"
               label "$indicator"
            }
         }
         stages {
           stage("Build for ${platform}") {
             steps {
               echo "build it"
             }
           }
         }
       }
    }
  • The relevant code could be moved to a shared library (even if you don't actually need to share it).
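As a sketch of the shared-library form (untested; the library and file names are illustrative), the function could live in `vars/buildFor.groovy`, so each platform's Jenkinsfile shrinks to a one-liner:

```groovy
// vars/buildFor.groovy in a shared library (names illustrative)
def call(String platform) {
    // Map each platform to its Dockerfile and status label
    def config = [
        P1: [dockerFile: 'src/main/docker/Jenkins-P1.Dockerfile'],
        P2: [dockerFile: 'src/main/docker/Jenkins-P2.Dockerfile'],
    ]
    def dockerFile = config[platform].dockerFile
    pipeline {
        agent {
            dockerfile {
                filename dockerFile
            }
        }
        stages {
            stage("Build, test and deploy for ${platform}") {
                steps {
                    echo "build it"
                }
            }
        }
    }
}
```

A per-platform Jenkinsfile would then be just `@Library('my-shared-lib') _` followed by `buildFor('P1')`, giving each platform its own job and therefore its own status indicator.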
Bruce Adams

I think the cleanest approach is to have this all in a single pipeline, similar to the first one you presented. The only modification I would make is running the platform stages in parallel, so you actually build and test for both platforms.

To reuse the previous stage's workspace, you can add `reuseNode true`.

Something similar to this flow, with parallel builds for the platforms:

pipeline { 
    agent { label 'docker' }
    stages {
      stage('Common pre') { ... }
      stage('Build all platforms') {
      parallel {
        stage('Build, test and deploy for P1') {
            agent {
                dockerfile {
                   filename 'src/main/docker/Jenkins-P1.Dockerfile'
                   reuseNode true
                }
            }
            steps {
               sh 'buildit...'
            }
        }
        stage('Build, test and deploy for P2') {
            agent {
                dockerfile {
                   filename 'src/main/docker/Jenkins-P2.Dockerfile'
                   reuseNode true
                }
            }
            steps {
               sh 'buildit...'
            }
        }
      }
      }
      stage('Common post parallel') { ... }
    }
}
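To get closer to a separate red/green indicator per platform within this single job, each parallel stage's steps could additionally be wrapped in `catchError` (this is my suggestion, not something you need for the parallel layout itself; it requires a reasonably recent workflow-basic-steps plugin). A failing platform then marks its own stage red without aborting the sibling stage:

```groovy
stage('Build, test and deploy for P1') {
    agent {
        dockerfile {
            filename 'src/main/docker/Jenkins-P1.Dockerfile'
            reuseNode true
        }
    }
    steps {
        // Mark only this stage as FAILURE (and the build as UNSTABLE)
        // so the P2 stage still runs and reports its own status
        catchError(buildResult: 'UNSTABLE', stageResult: 'FAILURE') {
            sh 'buildit...'
        }
    }
}
```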
hakamairi
  • I am currently doing it that way. There are some peculiarities, though. Jenkins creates a separate workspace for each platform, corresponding to the agent (I think) rather than the parallel block. So you need to duplicate stages that set up dependencies, deploy artifacts or publish test results. This makes sense, as they could in principle be executed on different physical machines. – Bruce Adams May 30 '19 at 08:52
  • Well then, may I suggest `reuseNode true`? I've updated my answer. – hakamairi May 30 '19 at 09:22