I have to run a series of Python scripts for about 10,000 objects. Each object is characterised by the arguments in one row of my catalogue. On my computer, to test the scripts, I was simply using a bash file like:
totrow=$(wc -l < catalogue.txt)
for (( i=1; i<=totrow; i++ )); do
    # pull the three arguments (columns) for this object from row i of the catalogue
    arg1=$(awk -v row=${i} 'NR==row {print $1}' catalogue.txt)
    arg2=$(awk -v row=${i} 'NR==row {print $2}' catalogue.txt)
    arg3=$(awk -v row=${i} 'NR==row {print $3}' catalogue.txt)
    python3 script1.py ${arg1} ${arg2} ${arg3}
done
which runs the script for each row of the catalogue. Now I want to run everything on a supercomputer with a SLURM scheduler. What I would like is to run, say, 20 objects (so 20 rows) on 20 CPUs at the same time, and work through the entire catalogue that way.
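For reference, this is roughly what I imagine a job-array submission script could look like (just a sketch: I'm not sure a job array is the right tool, and the #SBATCH values, the file name run_array.sh, and the per-task resources are guesses, subject to whatever array-size limits the cluster imposes):

#!/bin/bash
#SBATCH --job-name=catalogue
#SBATCH --array=1-10000%20     # one task per catalogue row, at most 20 running at once
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --time=01:00:00        # guessed; would need adjusting to the real runtime per object

# SLURM_ARRAY_TASK_ID is the row number handled by this array task
row=${SLURM_ARRAY_TASK_ID}
arg1=$(awk -v r=${row} 'NR==r {print $1}' catalogue.txt)
arg2=$(awk -v r=${row} 'NR==r {print $2}' catalogue.txt)
arg3=$(awk -v r=${row} 'NR==r {print $3}' catalogue.txt)

python3 script1.py ${arg1} ${arg2} ${arg3}

which I would then submit once with sbatch run_array.sh, if that is the right approach.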
Any suggestions? Thanks!