I have a question regarding resources and threads (it's not clear to me from the documentation): are resources per thread?
That's how various HPC job submission systems work; for example, that's how jobs work with LSF's bsub: if I request 64 threads with 1024 MiB each, bsub will schedule a job with 64 processes, each reserving 1024 MiB individually, and thus consuming 64 GiB in total.
(That total memory may or may not be on the same machine, as the 64 processes may or may not be on the same machine depending on the `span[hosts=n]` parameter. For OpenMPI use cases it might well be 64 different machines, each allocating its own local 1024 MiB chunk. But with `span[hosts=1]`, it's going to be a single machine with 64 threads and 64 GiB of memory.)
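For concreteness, a sketch of the kind of submission I mean (the program names are placeholders; whether rusage memory is reserved per slot or per host depends on site configuration such as `RESOURCE_RESERVE_PER_TASK`, and the unit on `LSF_UNIT_FOR_LIMITS`):

```sh
# 64 slots, 1024 MiB reserved per slot, all forced onto one host:
# with per-slot reservation this adds up to 64 GiB on that host.
bsub -n 64 -R "rusage[mem=1024] span[hosts=1]" ./my_threaded_program

# Without the span constraint, the 64 slots (and their 64 x 1024 MiB)
# may be spread across several hosts, e.g. for MPI workloads.
bsub -n 64 -R "rusage[mem=1024]" mpirun ./my_mpi_program
```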
When looking at the LSF profile, `mem_mb` seems to be passed from `resources` to bsub with only a unit conversion but otherwise the same value; thus it seems that Snakemake and LSF both assume that `total_memory = threads * mem_mb`.
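To make the ambiguity concrete, a minimal sketch (the rule and command are hypothetical):

```python
rule heavy:
    threads: 64
    resources:
        mem_mb=1024  # per thread (64 GiB total) or per job (1 GiB total)?
    shell:
        "./my_threaded_program --threads {threads}"
```

If `mem_mb` is per thread, submitting with `-n 64 -R "rusage[mem=1024]"` reserves the intended 64 GiB; if it is per job, that same submission over-reserves by a factor of 64, and the profile would instead need to pass `mem_mb / threads` per slot.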
I just wanted to make sure this assumption is correct.
Upon further analysis, the resources accounting in `jobs.py` contradicts the above.
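A sketch of the two readings (the names are mine, not from `jobs.py`):

```python
threads = 64
mem_mb = 1024

# Reading 1: mem_mb is the job's total (what the accounting in
# jobs.py appears to assume).
per_job_total_mib = mem_mb                 # 1 GiB

# Reading 2: mem_mb is reserved per thread/slot (what the bsub
# submission above implies).
per_thread_total_mib = threads * mem_mb    # 64 GiB
```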
Filing a bug report.