Questions tagged [sbatch]

sbatch submits a batch script to SLURM (Simple Linux Utility for Resource Management). The batch script may be given to sbatch through a file name on the command line, or if no file name is specified, sbatch will read in a script from standard input. The batch script may contain options preceded with "#SBATCH" before any executable commands in the script.

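As the description notes, "#SBATCH" options must precede any executable commands. A minimal sketch of such a batch script (the job name, time limit, and output path are placeholders):

```shell
#!/bin/bash
# #SBATCH lines are comments to the shell, so this script also runs locally;
# sbatch reads them as options as long as they precede the first command.
#SBATCH --job-name=example        # placeholder name
#SBATCH --time=00:05:00           # wall-clock limit
#SBATCH --ntasks=1
#SBATCH --output=example_%j.out   # %j expands to the job ID

# First executable command; #SBATCH lines after this point are ignored.
msg="running on $(hostname)"
echo "$msg"
```

Submitted with `sbatch script.sh`; because the directives are plain comments, the same file can be tested locally with `bash script.sh`.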
231 questions
1
vote
1 answer

Slurm sbatch job fail

I am writing a script test.job to submit a job using sbatch. The script is as below. #!/bin/bash #SBATCH -J test #SBATCH --time=00:01:00 #SBATCH -N 2 #SBATCH -n 2 #SBATCH -o logs/%j.sleep #SBATCH -e logs/%j.sleep echo test Then I run with…
Isaac
  • 35
  • 4
1
vote
0 answers

How to run python scripts sequentially in slurm submission?

I have a simple bash script that I use to submit multiple python scripts to the queue - they need to be completed sequentially. This used to work with qsub, but now that I am running the job on a cluster with slurm, the python scripts run…
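One common way to enforce sequential execution under Slurm is to chain jobs with `--dependency=afterok`, capturing each job ID with `--parsable`. A sketch of such a driver; the `step*.sbatch` wrapper scripts are placeholders, and `sbatch` is stubbed with a function so the sketch runs outside a cluster (remove the stub on the real cluster):

```shell
#!/bin/bash
# Stand-in for sbatch so this sketch runs anywhere; on the cluster remove
# this function and the real `sbatch --parsable` prints the job ID instead.
sbatch() { echo "$RANDOM"; }

# step1..step3.sbatch are placeholder batch scripts, one per python script;
# afterok makes each job wait for a successful exit of the previous one.
jid1=$(sbatch --parsable step1.sbatch)
jid2=$(sbatch --parsable --dependency=afterok:${jid1} step2.sbatch)
jid3=$(sbatch --parsable --dependency=afterok:${jid2} step3.sbatch)
echo "chained jobs: ${jid1} -> ${jid2} -> ${jid3}"
```

With `afterok`, a failing step cancels the rest of the chain, which mirrors the sequential behaviour the question describes under qsub.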
1
vote
1 answer

Sbatch submission of snakemake jobs

I have a Snakefile that runs a python script which outputs many files in a directory. I wrote the following Snakefile script to execute this MODELS = ["A", "B", "C"] SEEDS = [1, 2, 3] rule all: input: …
milkyway42
  • 144
  • 1
  • 6
1
vote
1 answer

Snakemake and sbatch

I have a Snakefile that has a rule which sends 7 different shell commands. I want to run each of these shell commands in sbatch and I want them to run in different slurm nodes. Right now when I include sbatch inside the shell command in the…
milkyway42
  • 144
  • 1
  • 6
1
vote
0 answers

Use memory from 2 Nodes for a Job - SLURM

I have an MPI job which needs more memory than the maximum available on one node. I am now trying to start the job on 2 nodes, which would use the memory from both nodes, at least that is the idea. The script below blocks CPUs from 2 nodes,…
JHN_28
  • 11
  • 1
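Slurm allocates memory per node rather than as one shared pool, so a two-node allocation only helps if the application itself distributes its data across ranks (which an MPI code can do). A hedged sketch of such a request; the rank layout is an assumption to adjust, and `--mem=0` requests all available memory on each allocated node:

```shell
#!/bin/bash
#SBATCH --nodes=2                # two nodes, so twice the per-node memory
#SBATCH --ntasks-per-node=1      # placeholder; adjust to the MPI rank layout
#SBATCH --mem=0                  # all available memory on each node
# srun ./mpi_program             # the program must spread its data via MPI
```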
1
vote
3 answers

Split a large gz file into smaller ones filtering and distributing content

I have a gzip file of size 81G; uncompressed it is 254G. I want to implement a bash script which takes the gzip file and splits it on the basis of the first column. The first column has values ranging from 1 to 10. I want to…
John
  • 815
  • 11
  • 31
1
vote
1 answer

Ways to pass $USER (or its equivalent) to --chdir in Slurm

The Problem I am attempting to create a script which automagically sets the directory in which to run an sbatch command with --chdir using a variable. My goal is to create a single template file that is easy for less experienced users to run my…
WesH
  • 460
  • 5
  • 15
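Because `#SBATCH` lines are comments, Slurm does not expand shell variables inside them; a common workaround is to pass `--chdir` on the sbatch command line, where the shell expands `$USER` before Slurm sees it. A sketch, with a placeholder `/scratch` path layout and a hypothetical `template.sbatch`:

```shell
#!/bin/bash
# The shell expands $USER here, before sbatch parses its options; inside an
# "#SBATCH --chdir=..." comment line the variable would be taken literally.
workdir="/scratch/${USER}/runs"                   # placeholder path layout
echo sbatch --chdir="$workdir" template.sbatch    # drop 'echo' to submit
```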
1
vote
0 answers

using parallel slurm jobs in combination with mpirun with multiple nodes

I am asking for a solution of an issue I do not get behind. I am using a slurm cluster and have a python script 'SOLVER.py', which uses mpirun commands in its code (calls a numerical spectral-element simulation). Each node on the cluster consists of…
Max T
  • 11
  • 1
1
vote
1 answer

Values in a sbatch --array command different from the job task IDs, each of them run in a different job

Is it possible to feed a script some values in the sbatch --array command line different from the job task IDs, with each of them run in a different job? sbatch --array=1-2 script.sh 4 6 with script.sh…
Miguel
  • 356
  • 1
  • 15
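One way to map array task IDs onto arbitrary values, matching the question's `sbatch --array=1-2 script.sh 4 6` invocation, is to index the positional arguments with `$SLURM_ARRAY_TASK_ID` inside the script. A sketch of such a `script.sh` (the off-cluster default task ID is an assumption for local testing):

```shell
#!/bin/bash
#SBATCH --array=1-2
# Invoked as: sbatch --array=1-2 script.sh 4 6
# Each array task indexes the positional arguments with its own task ID,
# so task 1 picks $1 (=4) and task 2 picks $2 (=6).
task_id=${SLURM_ARRAY_TASK_ID:-1}   # default so the sketch runs off-cluster
value=${!task_id}                   # bash indirect expansion: $1, $2, ...
echo "array task ${task_id} got value ${value:-<none>}"
```

The same pattern scales to any value list, since the array range only has to match the number of arguments.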
1
vote
1 answer

sbatch script with number of CPUs different from total number of CPUs in cores?

I'm used to starting an sbatch script on a cluster where the nodes have 32 CPUs and where my code needs a power-of-2 number of processors. For example I do this: #SBATCH -N 1 #SBATCH -n 16 #SBATCH --ntasks-per-node=16 or #SBATCH -N 2 #SBATCH -n…
Gundro
  • 139
  • 9
1
vote
1 answer

How to use User Inputs with Slurm

I'm looking to use inputs from the shell prompt with SLURM. For example, when I use a simple bash script like: #!/bin/bash echo What\'s the path of the files? read mypath echo What\'s the name you want to give to your archive with the…
Paillou
  • 779
  • 7
  • 16
1
vote
1 answer

How can I verify my google account to use TensorBoard.dev during sbatch?

I want to run a tensorboard.dev using the following bash file. #!/bin/bash #SBATCH -c 1 #SBATCH -N 1 #SBATCH -t 50:00:00 #SBATCH -p medium #SBATCH --mem=4G #SBATCH -o hostname_tensorboard_%j.out #SBATCH -e hostname_tensorboard_%j.err module load…
1
vote
2 answers

gpucompute* is down* in slurm cluster

My gpucompute nodes are in a down state and I can't send jobs to the GPU nodes. I couldn't bring my 'down GPU' nodes back after following all the solutions on the net. Before this problem, I had an error with the Nvidia driver configuration in a way…
Charlt
  • 17
  • 9
1
vote
1 answer

slurm job name in a for loop

I would like to have my job name be a function of the loop parameters. #!/bin/bash #SBATCH -n 4 #SBATCH -p batch576 MAXLEVEL=8 process_id=$! for Oh in '0.0001' '0.0005' do for H in '1.' '0.8' do mkdir half$Oh$H cp half…
Suntory
  • 305
  • 2
  • 15
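`#SBATCH` directives are fixed at submission time, so the job name cannot vary inside a loop within a single script; a common pattern is to move the loop into a driver that submits one job per parameter pair, setting `--job-name` on the command line. A sketch using the question's `Oh`/`H` loop; `run.sbatch` is a placeholder batch script:

```shell
#!/bin/bash
# Driver: one sbatch submission per (Oh, H) pair, each with its own job name.
for Oh in 0.0001 0.0005; do
  for H in 1. 0.8; do
    name="half_${Oh}_${H}"
    # --export passes the loop parameters into the job's environment.
    echo sbatch --job-name="$name" --export=ALL,Oh="$Oh",H="$H" run.sbatch
    # drop 'echo' above to actually submit
  done
done
```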
1
vote
0 answers

SLURM Array jobs - how to run as many job as possible? How to combine Slurm options most sensibly?

I am quite new to Slurm and this community, so please correct me in any way if I am doing anything wrong! :) I need to run my executable (a Python script) many times in parallel on an HPC cluster. This executable takes the Slurm Array task ID as…
cheshire
  • 53
  • 4