
Disclaimer: this is a cross-post from https://forum.gitlab.com/t/cancel-pipeline-from-job-or-how-to-flush-output-buffer/67008

Not for the first time, I am looking for a way to cancel a pipeline from within a job.

True, I can have a job whose script returns a non-zero exit code, like so:

my-job:
  script:
    - |
      echo something
      if whatever; then
        echo else
        exit 42
      fi

This will fail the job and thus also the pipeline. It’ll be marked as failed rather than cancelled.

So, I tried to be clever and cancel the pipeline through the API like so:

my-job:
  script:
    - |
      echo something
      if whatever; then
        echo else
        curl --request POST --header "PRIVATE-TOKEN: $MY_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines/$CI_PIPELINE_ID/cancel"
      fi

This works just fine, but the pipeline gets cancelled so quickly that not even the output buffer is flushed. Hence, I lose all `echo ...` output.

So, is there a better way to cancel the pipeline on dynamic conditions? If not, how could I ensure the stdout buffer is flushed before I cancel it?
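One low-tech workaround worth trying before the cancel call (a guess, not a confirmed fix: the 10-second grace period is an arbitrary value, and the job cannot control when the runner actually uploads its log):

```yaml
my-job:
  script:
    - |
      echo something
      if whatever; then
        echo else
        sleep 10  # arbitrary grace period, hoping the runner uploads the pending log lines first
        curl --request POST --header "PRIVATE-TOKEN: $MY_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines/$CI_PIPELINE_ID/cancel"
      fi
```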

Marcel Stör

1 Answer


I guess that when a pipeline is cancelled, the job-log handling is stopped immediately as well, so nothing written to the log from that moment on is kept.

However, you can separate the logic into two different jobs and use artifacts to pass the decision:

stages: [check, cancel]   # run the jobs in order so the artifact is available

my job:
  stage: check
  artifacts:
    paths: [.cancel]
  script: |
    echo something
    if whatever; then
      echo need to cancel
      echo abc > .cancel
    fi

cancel job:
  stage: cancel
  script: |
    if [ -f .cancel ]; then
      curl ....
    fi

That covers the answer. But I wonder: wouldn't rules be a better choice for your use case? Is there an advantage in cancelling rather than controlling execution with rules?

  • "job logs logic is immediately cancelled too" - exactly, I tried half a dozen things to help the stdout buffer flush _before_ I cancel the pipeline, but all in vain. I also tried moving the cancel to `after_script` - didn't help. Thanks for the idea of using a separate job; I didn't think of this. Downside: this "synthetic" job is always part of the pipeline even if it does not get cancelled. I wanted to use `workflow:rules` initially, but they are quite limited in what they can evaluate (basically just env variables). I need to `grep` a file in the repo to decide whether to cancel or not. – Marcel Stör Mar 20 '22 at 09:47
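Given the `grep` requirement from the comment, one option is to do the check as the very first job of the pipeline, so the cancel happens before any real work starts. A sketch, assuming the built-in `.pre` stage (which runs before all user-defined stages); the file name `ci-skip.txt` and the `skip-pipeline` marker are made up for illustration:

```yaml
check skip:
  stage: .pre
  script:
    - |
      if grep -q "skip-pipeline" ci-skip.txt; then
        echo "skip marker found, cancelling pipeline"
        curl --request POST --header "PRIVATE-TOKEN: $MY_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines/$CI_PIPELINE_ID/cancel"
      fi
```

The log-flushing caveat from the question still applies here; the gain is only that no later jobs start before the decision is made.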