
The script detect.py performs some analysis:

#!/usr/bin/python
[...]
for i in range(X, Y):
[...]

The idea is to run this Python script inside each of a set of folders. The variables X and Y change according to the folder we are in; for example, in folder 0.001-0.501 the script should use X=1 and Y=501.

This execution is controlled by the following prepare.sh script:

#!/usr/bin/bash
# Each folder has this name:
# 0.001-0.501
# 0.002-0.502
# 0.003-0.503
# ... and so on, up to:
# 8.500-9.000
# So we create the folders array:

lower_limit_f=($(seq 0.001 0.001 8.5))
upper_limit_f=($(seq 0.501 0.001 9))

FOLDERS=( )
for i in "${!lower_limit_f[@]}"; do
    FOLDERS+=("${lower_limit_f[$i]}-${upper_limit_f[$i]}")
done

# Now we create two more arrays:
# lower_limit, which contains all possible values of `X`
# and upper_limit, which contains all possible values of `Y`

lower_limit=($(seq 1 1 8500))
upper_limit=($(seq 501 1 9000))

# Now I loop over all `FOLDERS`:
for i in "${!FOLDERS[@]}"; do

    # I copy the python script to each folder
    cp detect.py "./${FOLDERS[$i]}"

    cd "./${FOLDERS[$i]}" || exit 1

    # I make the substitution of `X` and `Y`, accordingly:
    sed -i "s/0, len(traj)/${lower_limit[$i]}, ${upper_limit[$i]}/g" detect.py

    # We execute:
    python detect.py
    cd -

done
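
To make the substitution concrete: the sed pattern implies that detect.py originally contains the line `for i in range(0, len(traj)):`. In the first folder (i = 0), the loop body therefore does:

# In folder 0.001-0.501 (i = 0) the sed call expands to:
sed -i "s/0, len(traj)/1, 501/g" detect.py
# which turns
#     for i in range(0, len(traj)):
# into
#     for i in range(1, 501):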

The problem is that there are 8500 folders, and this loop runs sequentially, one folder at a time.

I would like to submit these jobs to Slurm in the following way:

  • Allocation of 1 node (40 cores).
  • 40 instances of detect.py working individually, one per folder.
  • When detect.py finishes in a given folder, it leaves 1 core available for the next folder (a tentative sketch follows after the run.sh skeleton below).

This would be handled by the following run.sh sbatch script, submitted to the Slurm queue as sbatch run.sh:

#!/bin/sh

#SBATCH --job-name=detect                
#SBATCH -N 1
#SBATCH --partition=xeon40
#SBATCH -n 40
#SBATCH --time=10:00:00

...

How could this be done within this run.sh script?
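
A minimal sketch of what I have in mind (untested; it assumes bash >= 4.3 for `wait -n`, rebuilds the FOLDERS array exactly as in prepare.sh, and assumes prepare.sh has already copied and sed-edited detect.py into each folder, so it no longer calls python itself):

#!/bin/bash

#SBATCH --job-name=detect
#SBATCH -N 1
#SBATCH --partition=xeon40
#SBATCH -n 40
#SBATCH --time=10:00:00

# Rebuild the folder names exactly as in prepare.sh
lower_limit_f=($(seq 0.001 0.001 8.5))
upper_limit_f=($(seq 0.501 0.001 9))
FOLDERS=( )
for i in "${!lower_limit_f[@]}"; do
    FOLDERS+=("${lower_limit_f[$i]}-${upper_limit_f[$i]}")
done

max_jobs=${SLURM_NTASKS:-40}   # 40 cores -> at most 40 concurrent runs

for folder in "${FOLDERS[@]}"; do
    # Throttle: once max_jobs processes are running, block until any
    # one of them finishes, which frees 1 core for the next folder.
    while (( $(jobs -rp | wc -l) >= max_jobs )); do
        wait -n   # requires bash >= 4.3
    done

    # Each instance runs in its own folder as a background process
    ( cd "./$folder" && python detect.py ) &
done

wait   # wait for the last 40 (or fewer) processes to finish

Is a plain background-process throttle like this appropriate inside an sbatch allocation, or should each python call go through srun as a separate job step?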
