I've been working on such a solution for days now. There are two core concepts in Turborepo to achieve this:
- Filtering workspaces
- Caching build outputs and storing the cache in the cloud (not what you're looking for)
So, you can filter your monorepo for a specific project, e.g.:
pnpm turbo run build --filter='my-project...[HEAD^1]' --dry=json
-> This checks whether the `build` task would need to run for the project `my-project`, comparing the current source with `HEAD^1`. The `--dry=json` option lets you see whether `build` would run for `my-project` without actually executing anything.
You can filter a whole lot more, check the docs.
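For the decision itself you have to interpret that dry-run output yourself. As a minimal sketch (the project name is just an example, and the exact shape of the `--dry=json` output can differ between Turborepo versions), you could count the tasks turbo would run with `jq`:

```sh
# Count how many tasks turbo would run for "my-project" since HEAD^1.
# An empty task list means nothing relevant changed, so build/deploy can be skipped.
TASKS=$(pnpm turbo run build --filter='my-project...[HEAD^1]' --dry=json | jq '.tasks | length')

if [ "$TASKS" -gt 0 ]; then
  echo "build needed"
else
  echo "build can be skipped"
fi
```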
Now, what I have built on top of this:
A new job in the GitHub workflow uses this filter command to check whether a deployment of my GraphQL server is needed, and it uploads the result of that decision as an artifact so that later jobs can read it (https://github.com/actions/upload-artifact).
My actual docker-build and deploy-to-fly-io jobs that run afterwards download this artifact and set a CONTINUE environment variable, depending on whether they should build + deploy or not.
Every job coming after that has an `if: ${{ env.CONTINUE == 'true' }}` condition to skip it if no build/deploy is needed.
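To make this concrete, here is a minimal sketch of how such a workflow could look. The workspace name `graphql-server`, the file `affected.txt`, the job names and the placeholder build/deploy commands are all assumptions, and the dry-run JSON shape may vary between Turborepo versions, so treat it as an illustration rather than a drop-in config:

```yaml
jobs:
  check-affected:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2              # HEAD^1 must exist for the turbo filter
      - uses: pnpm/action-setup@v4    # assumes "packageManager" is set in package.json
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: pnpm install --frozen-lockfile
      - name: Decide whether graphql-server is affected
        run: |
          TASKS=$(pnpm turbo run build --filter='graphql-server...[HEAD^1]' --dry=json | jq '.tasks | length')
          if [ "$TASKS" -gt 0 ]; then echo true > affected.txt; else echo false > affected.txt; fi
      - uses: actions/upload-artifact@v4
        with:
          name: affected
          path: affected.txt

  build-and-deploy:
    needs: check-affected
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: affected
      - name: Read decision into the job environment
        run: echo "CONTINUE=$(cat affected.txt)" >> "$GITHUB_ENV"
      - name: Build docker image
        if: ${{ env.CONTINUE == 'true' }}
        run: echo "docker build ..."   # placeholder for the real build command
      - name: Deploy to fly.io
        if: ${{ env.CONTINUE == 'true' }}
        run: echo "flyctl deploy ..."  # placeholder for the real deploy command
```

Note that the `env` context is not available in a job-level `if:`, so with this approach the guard has to sit on the individual steps; to skip whole downstream jobs you would typically expose the decision as a job output and check it via `needs.<job>.outputs` instead.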
It could be much simpler if you can run your build/deploy job directly with the turbo CLI, because then you can just combine your filter with the execution of the build - but that was not possible in my case.
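In that simpler case it can be a single command, assuming `deploy` is a task defined in your turbo.json (the task name is an assumption here):

```sh
# Build and deploy only if "my-project" or one of its dependencies
# changed since HEAD^1 -- no separate decision step needed.
pnpm turbo run build deploy --filter='my-project...[HEAD^1]'
```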
If you need to "skip" jobs that come later in your workflow, it's harder than it should be, as GitHub does not support "aborting" the remaining jobs of a workflow.
For all the other tasks like `lint`, `typecheck` and `test` -> just add an appropriate filter option to them and they will only run on the "affected" workspaces/projects in your PR.
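For example (the base ref `origin/main` is an assumption; use whatever branch your PRs target, and make sure that ref is fetched in CI):

```sh
# Run lint, typecheck and test only for workspaces changed since origin/main,
# plus their dependents.
pnpm turbo run lint typecheck test --filter='...[origin/main]'
```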
Resources: