
I have to run multiple simulations on a cluster using sbatch. In one folder I have the Python script to be run and a file to be used with sbatch:

#!/bin/bash -l
#SBATCH --time=04:00:00
#SBATCH --nodes=32
#SBATCH --ntasks-per-core=1
#SBATCH --ntasks-per-node=36
#SBATCH --cpus-per-task=1
#SBATCH --partition=normal
#SBATCH --constraint=mc

module load Python

source /scratch/.../env/bin/activate

srun python3 script.py

deactivate

What I have to do is to run the same Python script but using different values for --nodes. How can I do that? Moreover, I would like to create one folder for each run where the slurm file will be saved (output), named something like "nodes_xy".

wrong_path

1 Answer


Assuming your script is named submit.sh, you can remove the --nodes from the script and run:

for i in 2 4 8 16 32 64; do sbatch --nodes $i --output nodes_$i.txt submit.sh; done

This submits submit.sh once per value 2, 4, 8, etc., with two additional parameters: --nodes, which controls the number of nodes used, and --output, which sets the name of the output file. Note that all the output files will land in the current directory; if you really need them in separate directories, the one-liner needs a bit more work.
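For example, the one-liner can be extended to create one folder per run, as the question asks. This is a sketch that assumes submit.sh no longer contains the `#SBATCH --nodes` line; the `%j` in the output pattern is expanded by Slurm to the job ID:

```shell
#!/bin/bash
# Submit helper: use sbatch if it exists on this machine; otherwise just
# print the command, so the loop can be inspected outside the cluster.
submit() {
    if command -v sbatch >/dev/null 2>&1; then
        sbatch "$@"
    else
        echo sbatch "$@"
    fi
}

for i in 2 4 8 16 32 64; do
    mkdir -p "nodes_$i"                      # one folder per run
    submit --nodes "$i" \
           --output "nodes_$i/slurm-%j.out" \
           submit.sh
done
```

Each job then writes its Slurm output file inside its own nodes_xy directory instead of the submission directory.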

If the maximum allowed run time permits, you can perform all the runs in a single job with something like this:

#!/bin/bash -l
#SBATCH --time=04:00:00
#SBATCH --nodes=32
#SBATCH --ntasks-per-core=1
#SBATCH --ntasks-per-node=36
#SBATCH --cpus-per-task=1
#SBATCH --partition=normal
#SBATCH --constraint=mc

module load Python

source /scratch/.../env/bin/activate

for i in 2 4 8 16 32 64; do
    srun --nodes $i python3 script.py > nodes_$i
done

deactivate
damienfrancois