
I have a single runner associated with my project to avoid concurrent builds. Can GitLab be told to process the complete pipeline before starting a new one?

`concurrent` is set to 1 in the runner's config file.
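For reference, the global `concurrent` setting sits at the top level of the runner's `config.toml`; a minimal sketch (values assumed, not taken from the question):

```toml
# config.toml (GitLab Runner) — global section
# Caps how many jobs this runner process executes at once,
# across all [[runners]] entries in this file.
concurrent = 1
```

Note that this only limits simultaneous *jobs*, not pipelines, which is the root of the problem described below.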

before_script:
  - echo %CI_COMMIT_SHA%
  - echo %CI_PROJECT_DIR%

stages:
  - createPBLs
  - build
  - package


create PBLs:
  stage: createPBLs
  script: 
    - md "C:\HierBauen\%CI_COMMIT_SHA%\"
    - xcopy /y /s "C:/Bauen" "C:/HierBauen/%CI_COMMIT_SHA%"
    - xcopy /y /s "%CI_PROJECT_DIR%" "C:\HierBauen\%CI_COMMIT_SHA%"
    - cd "C:\HierBauen\%CI_COMMIT_SHA%"
    - ./run_orcascript.cmd
  only:
  - tags
  - master

build:
  stage: build
  script:
  - cd "C:\HierBauen\%CI_COMMIT_SHA%"
  - ./run_pbc.cmd
  only:
  - tags
  except:
  - master

build_master:
  stage: build
  script:
  - cd "C:\HierBauen\%CI_COMMIT_SHA%"
  - ./run_pbcm.cmd
  only:
  - master

package:
  stage: package
  script:
  - cd "C:\HierBauen\%CI_COMMIT_SHA%"
  - ./cpfiles.cmd
  artifacts:
    expire_in: 1 week
    paths:
      - GitLab-Build
    name: "%CI_COMMIT_REF_NAME%"
  only:
  - tags
  - master

Unfortunately, an earlier pipeline gets disturbed by a newly started one, and as a result the build ends up broken ...

EDIT: new config file:

before_script:
  - echo %CI_BUILD_REF%
  - echo %CI_PROJECT_DIR%
  - xcopy /y /s "C:/Bauen" "%CI_PROJECT_DIR%"

stages:
  - createPBLs
  - build
  - package


create PBLs:
  stage: createPBLs
  script: 
    - ./run_orcascript.cmd
  only:
  - tags
  - master


build:
  stage: build
  script:
  - ./run_pbc.cmd
  only:
  - tags
  except:
  - master


build_master:
  stage: build
  script:
  - ./run_pbcm.cmd



  only:
  - master

package:
  stage: package
  script:
  - ./cpfiles.cmd
  artifacts:
    expire_in: 1 week
    name: "%CI_COMMIT_REF_NAME%"
    paths:
      - GitLab-Build
  only:
  - tags
  - master
Hendouz
  • Can't be done natively. The last (hopefully) relevant open issue on gitlab.com: https://gitlab.com/gitlab-org/gitlab/-/issues/202186 Two workarounds are mentioned: https://gitlab.com/Istador/gitlab-ci-orchestrator and https://pypi.org/project/gitlab-job-guard/ – vctls May 12 '20 at 17:05

2 Answers


Currently there is no native way to do this, and there is an open issue about it on GitLab.

What you can do instead is add `limit = 1` to your gitlab-runner `config.toml` file, which enforces that the runner only accepts one job at a time.
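A sketch of where `limit` goes in `config.toml` — unlike the global `concurrent` setting, it sits inside the `[[runners]]` section (name, URL, and token below are placeholders):

```toml
concurrent = 1               # global cap for the whole runner process

[[runners]]
  name = "windows-builder"   # placeholder
  url = "https://gitlab.example.com/"
  token = "REDACTED"
  executor = "shell"
  limit = 1                  # this runner accepts at most one job at a time
```

With `limit = 1`, a second pipeline's jobs queue up instead of running alongside the first pipeline's jobs; it does not, however, guarantee that one pipeline finishes completely before another starts.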

I see that you are not passing artifacts between your stages, but if your build stage depended on anything in the createPBLs stage, you could use a combination of `artifacts` and `dependencies` to pass data between stages.


For example:

before_script:
  - echo %CI_COMMIT_SHA%
  - echo %CI_PROJECT_DIR%

stages:
  - createPBLs
  - build
  - package


create PBLs:
  stage: createPBLs
  script: 
    - md "C:\HierBauen\%CI_COMMIT_SHA%\"
    - xcopy /y /s "C:/Bauen" "C:/HierBauen/%CI_COMMIT_SHA%"
    - xcopy /y /s "%CI_PROJECT_DIR%" "C:\HierBauen\%CI_COMMIT_SHA%"
    - cd "C:\HierBauen\%CI_COMMIT_SHA%"
    - ./run_orcascript.cmd
  artifacts:
    name: createPBLS_%CI_COMMIT_SHA%
    untracked: true
    expire_in: 1 day
  only:
  - tags
  - master

build:
  stage: build
  script:
  - cd "C:\HierBauen\%CI_COMMIT_SHA%"
  - ./run_pbc.cmd
  dependencies:
  - createPBLs
  artifacts:
    name: build_%CI_COMMIT_SHA%
    untracked: true
    expire_in: 1 day
  only:
  - tags
  except:
  - master

build_master:
  stage: build
  script:
  - cd "C:\HierBauen\%CI_COMMIT_SHA%"
  - ./run_pbcm.cmd
  dependencies:
  - createPBLs
  artifacts:
    name: build_%CI_COMMIT_SHA%
    untracked: true
    expire_in: 1 day
  only:
  - master

package:
  stage: package
  script:
  - cd "C:\HierBauen\%CI_COMMIT_SHA%"
  - ./cpfiles.cmd
  dependencies:
  - build_master
  artifacts:
    expire_in: 1 week
    paths:
      - GitLab-Build
    name: "%CI_COMMIT_REF_NAME%"
  only:
  - tags
  - master
Creak
Rekovni
  • This is bad. How could I rebuild the pipeline so that two different pipelines cannot get in each other's way? Artifacts can only be accessed from the runner's workspace, correct? – Hendouz Mar 28 '18 at 14:29
  • @Hendouz Yes, I've updated the answer to give an example – Rekovni Mar 28 '18 at 14:58
  • Be aware, that the artifacts will be uploaded to the GitLab server, so this could take some time - edit what gets uploaded so only the essential components are moved between stages. – Rekovni Mar 28 '18 at 15:02
  • Thanks for your tip. However, I have the following problem: as you can see, my project is not built in the runner's workspace but in another directory, which also contains the build scripts (e.g. run_orcascript.cmd, run_pbc.cmd, run_pbcm.cmd). You say `untracked: true` above — so are those files included as well? Can the GitLab runner ever apply `untracked: true` to a directory other than its own checkout, where it pulls the repository? @Rekovni – Hendouz Mar 29 '18 at 10:14
  • @Hendouz Ah, you may have issues with using GitLab runners, as ideally builds would all be contained in their own build directory, if that makes sense. If it is possible, you could use a `before_script` to copy across all the files you need to run the build (so it's not reliant on files outside the build). Otherwise, you will always hit this issue of the two pipelines colliding while trying to make the same changes, as you've found. – Rekovni Mar 29 '18 at 10:17
  • I have a problem: I configured the pipeline to pull the batch files into the workspace via `before_script`. However, after create_PBL finishes and the next job (e.g. build_master) starts, the workspace is re-fetched and the files created by create_PBL are deleted. The script run_pbcm.cmd depends on the output of create_PBL. I have edited the current config into the question again. – Hendouz Mar 29 '18 at 12:07
  • Can it be adjusted so that the repository is pulled only once per pipeline (e.g. only in the first stage)? @Rekovni – Hendouz Mar 29 '18 at 12:42
  • No, but you should now implement what I edited in yesterday with the artifacts and dependencies, so that changes to the workspace are not affected – Rekovni Mar 29 '18 at 12:45
  • The problem is, if I write `untracked: true` then the batch files are included as well, aren't they? How can I avoid that? – Hendouz Apr 03 '18 at 08:33
  • If you know what files you want moved between stages, you can use wildcards to select the files you want. – Rekovni Apr 03 '18 at 10:10
  • Many thanks. I have now solved this more simply: from 3 stages I have made just one stage: build. That's it. @Rekovni – Hendouz Apr 03 '18 at 10:29
  • This is simply a total mess. All issues in regards to this problem are years old with hundreds of upvotes and no outcome at all. – Ini May 11 '23 at 17:55

Use the resource_group feature, which provides a way to group jobs that need the same mutex wrapped around them. Out of the box, resource groups don't provide this pipeline-level mutexing (concurrency prevention); however, setting the process mode option to "oldest first" does. The docs further state that:
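As a minimal sketch (job and key names are illustrative, not from the question), every job that must not overlap gets the same `resource_group` key in `.gitlab-ci.yml`:

```yaml
build:
  stage: build
  script:
    - ./run_pbc.cmd
  # Jobs sharing this key are serialized: only one runs at a time,
  # even across concurrently created pipelines.
  resource_group: windows-build-machine
```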

To change the process mode of a resource group, you must use the API and send a request to edit an existing resource group by specifying the process_mode:

- unordered
- oldest_first
- newest_first
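Assuming a project ID of 123 and a resource group keyed `windows-build-machine` (both placeholders), the edit request would look roughly like:

```shell
curl --request PUT \
     --header "PRIVATE-TOKEN: <your_access_token>" \
     --data "process_mode=oldest_first" \
     "https://gitlab.example.com/api/v4/projects/123/resource_groups/windows-build-machine"
```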

Also mentioned here: https://stackoverflow.com/a/74286510/532621

timblaktu