I am using the free shared runners on gitlab.com. I have a GitLab CI pipeline that runs the following lftp commands one after the other:
lftp -c "set ftp:ssl-allow no; open -u $USERNAME,$PASSWORD $HOST; glob -a rm -r ./httpdocs/*"
lftp -c "set ftp:ssl-allow no; open -u $USERNAME,$PASSWORD $HOST; mirror -R public/ httpdocs --ignore-time --parallel=50 --exclude-glob .git* --exclude .git/"
The purpose of these commands is to delete the contents of the httpdocs folder (the previously deployed files) and then upload the new build artifact.
The CI pipeline is triggered from a CMS. It sometimes happens that content editors update content at the same time, which results in many triggers and therefore many pipelines running in parallel (each pipeline takes about 3 minutes to finish).
The pipeline will then start failing with the following error:
rm: Access failed: 550 /httpdocs/build-html-styles.css: No such file or directory
This happens because a file that was already deleted by another pipeline is still queued for deletion in the current one. A very similar error occurs when the httpdocs folder is completely empty. The failed rm makes the whole pipeline fail (the second lftp upload command is never executed).
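The behaviour is easy to reproduce against an already-empty httpdocs folder (the host and credentials below are placeholders):

```sh
# When httpdocs is already empty (or another pipeline has just emptied it),
# rm has nothing to delete and lftp exits with a non-zero status.
lftp -c "set ftp:ssl-allow no; open -u user,pass ftp.example.com; glob -a rm -r ./httpdocs/*"
echo "exit code: $?"   # non-zero

# GitLab CI stops the job script at the first failing command,
# so the mirror command on the next script line never runs.
```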
How do I prevent this from happening? Using lftp to upload the artifact is not a must; the job runs in the node:8.10.0 Docker image.