I want to set a maximum number of CPUs used by a Snakemake pipeline on a SLURM cluster, so that the number of jobs submitted in parallel is effectively limited by a CPU cap rather than by the --jobs parameter.
Snakemake version 7.20.0
So I thought that
snakemake --local-cores 240 --jobs 20 --cluster-config cluster.yaml --latency-wait 60 --cluster 'sbatch -t {cluster.time} --mem={cluster.mem} -c {cluster.cpus} -o {cluster.output} -e {cluster.error}'
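For context, here is a sketch of the kind of cluster.yaml those placeholders assume (the rule name and values are illustrative, not my actual config):

```yaml
# Hypothetical cluster.yaml: a __default__ section plus a per-rule
# override; {cluster.time}, {cluster.mem}, {cluster.cpus}, etc. in the
# --cluster string are filled from these keys.
__default__:
  time: "01:00:00"
  mem: "8G"
  cpus: 1
  output: "logs/{rule}.{wildcards}.out"
  error: "logs/{rule}.{wildcards}.err"

some_heavy_rule:
  time: "12:00:00"
  mem: "64G"
  cpus: 40
```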
would schedule at most 20 jobs in parallel (right? the help text "Use at most N CPU cluster/cloud jobs in parallel" is a bit unclear, but I take it to mean 'jobs' in my case), with at most 240 CPUs used on the host machine at the same time.
So if I have 20 jobs with 40 threads (or CPUs) each, I expected only 6 jobs to be scheduled at the same time, because 6x40=240 and I set --local-cores 240 ("use at most 240 cores of the host machine in parallel").
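In other words, the scheduling I expected is simply the CPU cap divided by the threads per job (shell arithmetic shown just to make the expectation explicit):

```shell
#!/bin/sh
# Expected concurrency under a 240-CPU cap with 40 threads per job:
# floor(cap / threads_per_job) jobs running at once.
cap=240
threads_per_job=40
echo $(( cap / threads_per_job ))   # prints 6 (not 20)
```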
But all 20 jobs are still scheduled with 40 CPUs each, so 800 CPUs are booked, while I expected at most 240 CPUs to be in use.
Do I need to set --jobs to unlimited so that the number of jobs submitted in parallel is determined by --local-cores?