How can I set the maximum number of CPUs each job can ask for in Slurm?
We're running a GPU cluster and want a sensible number of CPUs to always be available for GPU jobs. This works reasonably well as long as a job requests GPUs, because of the GPU <-> CPU mapping in gres.conf. But it doesn't stop a job that requests no GPUs at all from acquiring every CPU in the system.
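For reference, the gres.conf binding we rely on looks roughly like this (node names, device files, and core ranges are illustrative, not our exact layout):

```
# gres.conf - pin each GPU to a block of CPU cores
NodeName=gpunode[01-04] Name=gpu File=/dev/nvidia0 Cores=0-7
NodeName=gpunode[01-04] Name=gpu File=/dev/nvidia1 Cores=8-15
```

This only influences CPU placement for jobs that actually request a GPU; a plain CPU-only job isn't constrained by it, which is the problem we want to solve.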