Questions tagged [sbatch]

sbatch submits a batch script to SLURM (Simple Linux Utility for Resource Management). The batch script may be given to sbatch through a file name on the command line, or if no file name is specified, sbatch will read in a script from standard input. The batch script may contain options preceded with "#SBATCH" before any executable commands in the script.

231 questions
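A minimal batch script for illustration — the job name, resources, and command are placeholders:

```shell
#!/bin/bash
#SBATCH --job-name=demo   # placeholder job name
#SBATCH --nodes=1         # "#SBATCH" options must precede any executable command
#SBATCH --time=00:10:00   # wall-clock limit, hh:mm:ss
srun hostname             # the actual work starts here
```

Submitted with `sbatch demo.sh`, or via standard input with `sbatch < demo.sh`.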
0 votes • 1 answer

sbatch duplicating tasks across nodes instead of spreading tasks across nodes in SLURM

I have a program that takes an input file describing a range of initial conditions and outputs a range of final conditions. I also have a batch script that "parallelizes" the program by just breaking up the initial condition ranges in the input file…
Beezum • 341 • 1 • 4 • 12
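One way this symptom can arise is launching the full program on every node. A sketch that instead gives sbatch a task count and lets each task pick its own slice of the input (program and file names are hypothetical):

```shell
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks=4            # four tasks total
#SBATCH --ntasks-per-node=1   # one task per node, so work is spread, not duplicated
# srun sets SLURM_PROCID (0..3) per task; each task reads a distinct input slice:
srun bash -c './my_prog input_part_${SLURM_PROCID}.txt'
```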
0 votes • 1 answer

How to make a directory of current time as a part of SLURM's log path

I have a .slurm file which can be run on a Linux GPU cluster. The file looks like: #!/bin/bash #SBATCH -o ./myrepo/output.log #SBATCH -J jobname #SBATCH --gres=gpu:V100:1 #SBATCH -c 5 source /home/LAB/anaconda3/etc/profile.d/conda.sh conda activate…
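A sketch of one workaround: `#SBATCH -o` lines are parsed before the script runs, so `$(date)` is not expanded there. A small wrapper can create the timestamped directory and pass `-o` on the command line instead (paths and job name mirror the question; the job-script filename is hypothetical):

```shell
#!/bin/bash
# Build a timestamped log directory at submission time:
LOGDIR="./myrepo/$(date +%Y-%m-%d_%H-%M)"
mkdir -p "$LOGDIR"
# Drop the leading 'echo' to actually submit on a cluster:
echo sbatch -o "$LOGDIR/output.log" -J jobname my_job.slurm
```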
0 votes • 0 answers

Sbatch job array with dependency

I am running a job array in which I want each job (running the script job.sh) to have a dependency so that it can't go to the scheduler until the previous array job has started, and I can't seem to figure out how to code this: #!/bin/bash #SBATCH…
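SLURM's `after:<jobid>` dependency releases a job once the referenced job has started (use `afterok` to wait for successful completion instead). A sketch that chains submissions of the question's job.sh; the loop count is illustrative:

```shell
#!/bin/bash
# --parsable makes sbatch print only the job id, so it can be captured:
prev=$(sbatch --parsable job.sh)
for i in $(seq 2 5); do
  # Each job is held until the previous one has started:
  prev=$(sbatch --parsable --dependency=after:${prev} job.sh)
done
```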
0 votes • 2 answers

Basic Slurm questions

I have been using a cluster to do some heavy computing. There are a few things I do not understand. For instance, I have used this configuration for all my jobs so far #SBATCH -J ss #SBATCH -N 1 # allocate 1 nodes for the job #SBATCH…
Terrence • 11 • 2
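For reference, a sketch of what the quoted options do, with the truncated parts filled by illustrative placeholders:

```shell
#!/bin/bash
#SBATCH -J ss           # -J / --job-name: the name shown by squeue
#SBATCH -N 1            # -N / --nodes: number of nodes to allocate
#SBATCH -n 4            # -n / --ntasks: total task count (illustrative)
#SBATCH -t 01:00:00     # -t / --time: wall-clock limit (illustrative)
srun ./my_program       # launch the tasks on the allocation
```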
0 votes • 0 answers

How to automate 'get' command from a remote server through 'sbatch' slurm command?

I am getting a bunch of TBs of data files from a remote read-only tape archive into my work directory on my working Linux cluster. Unfortunately I am not allowed to install expect on the computer I am working with. I would like to write a command by…
Behnam • 501 • 1 • 5 • 21
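A sketch of one expect-free route: if the archive is reachable over SFTP with key-based authentication, sftp's batch mode (`-b`) runs `get` commands non-interactively. Host, user, and paths are hypothetical:

```shell
#!/bin/bash
#SBATCH -J fetch-tape        # illustrative job name
#SBATCH --time=24:00:00      # illustrative limit for a long transfer
# '-b -' reads the command batch from stdin; no interactive prompts, so no expect:
sftp -b - user@archive.example.org <<'EOF'
get /tape/data/file1.tar
get /tape/data/file2.tar
EOF
```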
0 votes • 2 answers

submit sbatch jobs in loop

I have a txt file (say jobs.txt) that has several lines like: "sbatch -w node00x script.sh 1" "sbatch -w node00z script.sh 10" . . etc I wonder if it is possible to create an executable bash file like the following #!/bin/bash while read -r…
Ghosh • 151 • 7
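If each line of jobs.txt is a complete sbatch command (without the surrounding quotes), a plain read loop works. A runnable sketch that writes a sample jobs.txt and echoes instead of submitting:

```shell
#!/bin/bash
# Create a sample jobs.txt, one sbatch command per line (mirrors the question):
printf '%s\n' 'sbatch -w node00x script.sh 1' 'sbatch -w node00z script.sh 10' > jobs.txt

while IFS= read -r line; do
  [ -z "$line" ] && continue      # skip blank lines
  echo "would run: $line"         # swap 'echo' for 'eval "$line"' on the cluster
done < jobs.txt
```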
0 votes • 3 answers

Use of a HEREDOC with SLURM sbatch --wrap

I am encountering difficulties using a (Bash) HEREDOC with a SLURM sbatch submission, via --wrap. I would like the following to work: SBATCH_PARAMS=('--nodes=1' '--time=24:00:00' '--mem=64000' '--mail-type=ALL') sbatch ${SBATCH_PARAMS[@]}…
Coby Viner • 184 • 5 • 16
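`--wrap` takes a single string, which fights with heredoc quoting; since sbatch also accepts a script on standard input, the heredoc can feed it directly. A sketch using the question's parameters (the script body is a placeholder):

```shell
SBATCH_PARAMS=('--nodes=1' '--time=24:00:00' '--mem=64000' '--mail-type=ALL')
# No --wrap: the heredoc itself becomes the batch script read from stdin.
sbatch "${SBATCH_PARAMS[@]}" <<'EOF'
#!/bin/bash
echo "running on $(hostname)"
EOF
```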
0 votes • 1 answer

Execute batch jobs sequentially

I have a batch job which looks like this sbatch --wrap "perl test.pl file1 file2" sbatch --wrap "perl test.pl file3 file4" sbatch --wrap "perl test.pl file5 file6" sbatch --wrap "perl test.pl file7 file8" and the list goes on till sbatch --wrap "perl…
JTh • 13 • 2
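A sketch of one way to serialize such jobs: `--dependency=singleton` holds each job until no other job with the same name and user is running, so same-named jobs execute one at a time, in submission order:

```shell
# Same job name + singleton dependency => the jobs run sequentially:
sbatch -J perlchain --dependency=singleton --wrap "perl test.pl file1 file2"
sbatch -J perlchain --dependency=singleton --wrap "perl test.pl file3 file4"
# Alternative: capture each id with --parsable and chain --dependency=afterok:<id>.
```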
0 votes • 1 answer

How to tell slurm to cache config file for multiple submissions

I have three files: config.yaml, script.py, and slurm.sh. I am submitting a job to the Slurm scheduler using the slurm.sh file, which calls script.py, and script.py loads the config from the config.yaml file. It would look…
palimboa • 171 • 10
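A sketch of snapshotting the config at submission time, so edits made while jobs sit in the queue don't change what a submitted job reads. The directory layout and the `CONFIG` variable (read by slurm.sh) are illustrative:

```shell
#!/bin/bash
printf 'lr: 0.1\n' > config.yaml          # stand-in for the real config.yaml
# Freeze a copy of the config in a per-submission directory:
RUN_DIR="runs/$(date +%s)"
mkdir -p "$RUN_DIR"
cp config.yaml "$RUN_DIR/config.yaml"
# Point the job at the frozen copy (drop 'echo' to actually submit):
echo sbatch --export=ALL,CONFIG="$RUN_DIR/config.yaml" slurm.sh
```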
0 votes • 1 answer

R script runs on external cluster without performance improvement

I'm using a topic modeling approach that works well on my computer in RStudio, except that it takes ages. So I'm using a Linux cluster. However, even though I seem to request a lot of capacity, it doesn't really speed up. I'm sorry, I'm a greenhorn... So…
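One likely explanation, in sketch form: requesting cores from SLURM only reserves them; the R code must use them explicitly (e.g. via the `parallel` package). A job-script sketch that passes the reserved core count through (the R script name is hypothetical):

```shell
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8    # illustrative core count
# SLURM_CPUS_PER_TASK is set by SLURM; hand it to R so the code can match it:
Rscript my_model.R "$SLURM_CPUS_PER_TASK"
```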
0 votes • 1 answer

Handling SLURM .out output

I am using sbatch to run scripts, and I want the output text to be written in a file from a certain point, i.e. I want to echo some text so the user can see, but after a certain command I want all output to be written in a file. Is there a way to do…
Joshhh • 425 • 1 • 4 • 18
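In bash this point-of-no-return redirection is done with `exec`. A runnable sketch — the subshell keeps the redirect contained for demonstration; a real job script would call `exec` directly, and the filename is illustrative:

```shell
#!/bin/bash
echo "visible to the user"         # still goes to the terminal / the .out file
(
  exec > from_here.log 2>&1        # from here on, all output goes to the file
  echo "written to from_here.log"
)
```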
0 votes • 2 answers

SLURM: Save job script

In SLURM, I can easily specify the files for logging in my job script: #SBATCH --output=logs/output-%j #SBATCH --error=logs/error-%j Now, I use a jobscript that is generated programmatically. Whenever I submit a job, I'd like to save that jobscript…
Michael • 7,407 • 8 • 41 • 84
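A sketch of one approach: from inside the running job, ask the controller for the submitted script and save a copy next to the logs (assumes a SLURM version whose scontrol supports `write batch_script`):

```shell
#!/bin/bash
#SBATCH --output=logs/output-%j
#SBATCH --error=logs/error-%j
# Save the exact script this job was submitted with, keyed by job id:
scontrol write batch_script "$SLURM_JOB_ID" "logs/jobscript-${SLURM_JOB_ID}.sh"
```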
0 votes • 1 answer

How to submit jobs to SLURM with different nodes?

I have to run multiple simulations on a cluster using sbatch. In one folder I have the Python script to be run and a file to be used with sbatch: #!/bin/bash -l #SBATCH --time=04:00:00 #SBATCH --nodes=32 #SBATCH --ntasks-per-core=1 #SBATCH…
wrong_path • 376 • 1 • 6 • 18
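One common pattern for running the same script as several independent jobs, sketched as a job array; the array size and script name are illustrative:

```shell
#!/bin/bash -l
#SBATCH --time=04:00:00
#SBATCH --array=0-3          # one array task per simulation (illustrative)
#SBATCH --nodes=1            # each array task gets its own allocation
# Every element runs the same script with a distinct index:
srun python simulation.py "$SLURM_ARRAY_TASK_ID"
```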
0 votes • 1 answer

Slurm: Is it possible to give or change pid of the submitted job via sbatch

When we submit a job via sbatch, job ids are given in incremental order. Based on my observation, this order starts again from 1. sbatch -N1 run.sh Submitted batch job 20 //Goal is to change the submitted batch job's id, if possible. [Q1] For…
alper • 2,919 • 9 • 53 • 102
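For what a submission script can control: job ids are assigned by the controller and cannot be chosen per job (only the cluster-wide starting value, `FirstJobId` in slurm.conf, is an admin setting). A sketch of capturing the assigned id instead:

```shell
# --parsable makes sbatch print just the job id, easy to capture:
jobid=$(sbatch --parsable -N1 run.sh)
echo "submitted as $jobid"
```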
0 votes • 1 answer

Suppress all warnings in MATLAB on server

I am running a batched analysis on a server (using SBATCH slurm) using glmfit() 106 times, and it keeps outputting warning files that don't appear when I run locally. I get multiple types of warnings: Warning: Iteration limit reached Warning:…
Brendan Frick • 1,047 • 6 • 19
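A sketch of silencing the warnings at the source rather than filtering the output files afterwards; assumes MATLAB R2019a or newer for `-batch`, and `my_analysis` is a hypothetical script name:

```shell
#!/bin/bash
#SBATCH -J glmfit-batch      # illustrative job name
# Turn all MATLAB warnings off before the analysis runs:
matlab -batch "warning('off','all'); my_analysis"
```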