
A virtual machine that we use as a self-hosted agent ran out of space on its data disk, causing the agent services to fail to start and pipelines to fail to run. As a result, the agent sat in an idle state, not communicating with the Azure DevOps Pipelines service. Extending the VM's data disk would resolve the issue but would increase costs, so as a temporary workaround we deleted older build logs from the _diag folder.
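For reference, the temporary cleanup we ran can be sketched roughly as follows (the agent path and the 7-day threshold are just examples, adjust them to your install):

```shell
#!/bin/sh
# Remove agent diagnostic logs older than 7 days.
# AGENT_DIR is an assumed install path -- adjust to where your agent lives.
AGENT_DIR="${AGENT_DIR:-/home/azagent/myagent}"
if [ -d "$AGENT_DIR/_diag" ]; then
  find "$AGENT_DIR/_diag" -name '*.log' -type f -mtime +7 -print -delete
fi
```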

Looking inside the VM agent directory we can see the following folders: [screenshot of the agent directory]

Is there any tool that Azure provides to deal with this kind of problem?

Is the "_diag" folder the right one whose contents should be deleted?

Is there any kind of automation that can be used, for example in the pipelines?

Is the maintenance job functionality of Azure DevOps a solution for this problem?

Nmaster88

1 Answer


You can use maintenance jobs to clean up the working folders for your builds. Working folders are reused on each pipeline run, but if you have multiple pipelines that do not run often, maintenance jobs will free up space on the virtual machine.

Maintenance jobs will not remove logs from your agents. However, the _diag folder usually does not consume much space on the agent, so I suggest focusing on cleaning up artifacts and setting up maintenance jobs.

You can configure agent pools to periodically clean up stale working directories and repositories.

https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/pools-queues?view=azure-devops&tabs=yaml%2Cbrowser
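To make concrete what such a stale-directory cleanup amounts to, here is a rough sketch that removes numbered pipeline working folders that have been untouched for a while; the AGENT_DIR path and the 30-day threshold are assumptions for illustration, not the exact retention logic the maintenance job uses:

```shell
#!/bin/sh
# Sketch: remove numbered per-pipeline working folders under _work
# that contain nothing modified within the last 30 days.
# AGENT_DIR and the 30-day threshold are assumptions for illustration.
AGENT_DIR="${AGENT_DIR:-/home/azagent/myagent}"
if [ -d "$AGENT_DIR/_work" ]; then
  for dir in "$AGENT_DIR/_work"/*/; do
    # Skip non-numeric entries such as _tool and _temp.
    case "$(basename "$dir")" in
      *[!0-9]*) continue ;;
    esac
    # If nothing inside was modified in the last 30 days, remove the folder.
    if [ -z "$(find "$dir" -mtime -30 -print -quit)" ]; then
      rm -rf "$dir"
    fi
  done
fi
```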

A detailed explanation of the setup can be found in my article:
https://blog.geralexgr.com/docker/maintenance-jobs-for-build-agents-explained-azure-devops

GeralexGR
  • Hi @GeralexGR, thanks for your answer. You're saying that it's improbable that ```_diag``` will ever be an issue, and that inside the ```_work``` directory each folder with a number is bound to a pipeline, and that we should focus on deleting the artifacts and settings inside of it? – Nmaster88 Jul 21 '22 at 13:19
  • If that's the case, then that's exactly what the maintenance job will do, I think. – Nmaster88 Jul 21 '22 at 13:20
  • @Nmaster88 correct, each folder inside the _work directory is a pipeline working folder, and inside it you can find files such as artifacts, source code, etc. When pipelines have not run for a number of days, maintenance jobs will clean up these folders for you based on the retention policy that you define. – GeralexGR Jul 21 '22 at 13:32
  • From the image I shared, I only have one folder named with just a number, which is ```1```. Does that mean only one pipeline is using it? There are also folders named r1, r2 and r3; are those different pipelines too, or the same? – Nmaster88 Jul 21 '22 at 13:35
  • The r1, r2 and r3 folders are for release pipelines. So you currently have one pipeline created (which ran once) and three release pipelines. The folders are created after the run. – GeralexGR Jul 21 '22 at 13:39