I have a Fortran code that I have to run on a cluster with Slurm. I compiled the code in my home directory (which is mounted on all the cluster nodes) and have always run it from there. However, the partition where the home is mounted only has about 250 GB. I have to run many different simulations that generate a lot of output files, so the disk fills up quickly, and my colleagues and I constantly run into disk-space problems (we have to stop the simulations, move the files by hand and restart them). We move the files to a secondary disk with 5 TB of space.
I was wondering if there is a way to submit the simulations with sbatch from the home directory but have all the output files saved on the secondary disk (which is not shared between all the nodes). I tried the --output flag, but it did not work.
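For reference, this is roughly what I tried; the /data path is just a placeholder for the mount point of the secondary disk on the node:

#SBATCH --output=/data/k1_01/output.log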
The bash script I normally submit with sbatch is simple and looks like this:
#!/bin/bash
#SBATCH --partition=cpu
#SBATCH --job-name=k1_01
#SBATCH --mem=16G
#SBATCH --time=90-0:0
#SBATCH --output=output.log
#SBATCH --nodelist=node13
./program < input.in
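I also wondered whether just changing the working directory inside the job script would be enough, keeping the same #SBATCH header but replacing the last line with something along these lines (the /data/k1_01 path is again only a placeholder for a directory on the secondary disk):

mkdir -p /data/k1_01               # placeholder directory on the secondary disk
cd /data/k1_01                     # so the program writes its output files there
$HOME/program < $HOME/input.in     # executable and input file stay in the home directory

but since the secondary disk is not shared between all the nodes, I am not sure this is a sensible approach.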
FYI, the program generates many output files: some are updated at every iteration of the main loop, and others are created new, one for each step (I have 2000 steps).
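To give an idea of the volume, a single run ends up with something like the layout below (the file names are just placeholders):

energy.dat           # appended at every iteration of the main loop
observables.dat      # appended at every iteration
snapshot_0001.dat    # one new file per step
...
snapshot_2000.dat

so a single run already produces a couple of thousand files.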
Thanks for your help