I am new to HPC, and to SLURM in particular, and I have run into some trouble.
I was given access to an HPC cluster with 32 CPUs on each node. For the calculations I need, I wrote 12 Python multiprocessing scripts, each of which uses exactly 32 CPUs. Instead of starting each script manually in interactive mode (which is also an option, but it takes a lot of time), I decided to write a batch script that starts all 12 scripts automatically; it is shown below, after the sketch.
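For context, each of the 12 Python scripts follows roughly this pattern (a simplified sketch; the real worker function and input data are placeholders):

//SCRIPT//
#!/usr/bin/env python3
# Simplified sketch of one of the 12 scripts; the real worker function
# and input data are placeholders.
import multiprocessing as mp

def worker(x):
    # stands in for the actual per-item calculation
    return x * x

if __name__ == "__main__":
    # one worker process per CPU on the node
    with mp.Pool(processes=32) as pool:
        results = pool.map(worker, range(10_000))
    print(len(results))
//UNSCRIPT//

And here is my batch script: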
//SCRIPT//
#!/bin/bash
#SBATCH --job-name=job_name
#SBATCH --partition=partition
#SBATCH --nodes=1
#SBATCH --time=47:59:59
#SBATCH --export=NONE
#SBATCH --array=1-12
module switch env env/system-gcc
module load python/3.8.5
source /home/user/env/bin/activate
python3.8 $HOME/Script_directory/Script$SLURM_ARRAY_TASK_ID.py
exit
//UNSCRIPT//
But as far as I understand, this script would start all of the jobs from the array on the same node, so the underlying Python scripts might end up fighting over the available CPUs and slowing each other down.
How should I modify my batch file in order to start each task from the array on a separate node?
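My guess is that I need additional resource directives, something like the lines below, but I am not sure whether this is correct or whether there is a better way (these values are just my guess, not something I found in the cluster documentation):

//SCRIPT//
# Possible additions to the #SBATCH header (my guess, not verified):
#SBATCH --ntasks=1            # one task per array job
#SBATCH --cpus-per-task=32    # request all 32 CPUs of a node for that task
#SBATCH --exclusive           # do not share the node with other jobs
//UNSCRIPT//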
Thanks in advance!