
I have three Slurm scripts. 1.slurm:

#!/bin/bash
#SBATCH --job-name=first
#SBATCH --partition=cuda.q

sbatch 2.slurm

2.slurm:

#!/bin/bash
#SBATCH --job-name=second
#SBATCH --partition=cuda.q

sbatch 3.slurm

3.slurm:

#!/bin/bash
#SBATCH --job-name=third
#SBATCH --partition=cuda.q

echo "a"

Only the 1.slurm job is submitted, and in its output file I get the error: sbatch: error: Batch job submission failed: Access/permission denied

  • This works fine with Slurm on the HPC systems I use but it depends on how Slurm has been setup and configured on the HPC system. I suggest you contact the support team for the HPC service you are using with this question. – AndyT Jan 18 '23 at 10:34
  • Can the `2.slurm` job be submitted successfully on its own? (I.e. not inside job 1) – damienfrancois Jan 18 '23 at 10:40

1 Answer


Your cluster isn't configured to allow job submission from a compute node. One workaround is to SSH back to the login (head) node and run sbatch there; this assumes passwordless SSH from the compute nodes to the login node is permitted.

File 1.slurm might look like:

#!/bin/bash
#SBATCH --job-name=first
#SBATCH --partition=cuda.q

# LOGIN_NODE is a placeholder for your cluster's login-node hostname.
# Change into the submission directory first: the SSH session starts in
# $HOME, where 2.slurm may not exist. SLURM_SUBMIT_DIR is set by Slurm
# in the job's environment on the compute node.
ssh ${USER}@LOGIN_NODE "cd ${SLURM_SUBMIT_DIR} && sbatch 2.slurm"
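
If SSH from compute nodes is blocked as well, a more portable pattern is to submit the whole chain from the login node and link the jobs with --dependency, so no job ever calls sbatch itself. A minimal sketch (with the sbatch lines removed from 1.slurm and 2.slurm, since the chain is now built externally):

#!/bin/bash
# Run on the login node. --parsable makes sbatch print just the job ID,
# and afterok delays each job until the previous one exits successfully.
jid1=$(sbatch --parsable 1.slurm)
jid2=$(sbatch --parsable --dependency=afterok:${jid1} 2.slurm)
sbatch --dependency=afterok:${jid2} 3.slurm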