Questions tagged [sbatch]

sbatch submits a batch script to SLURM (Simple Linux Utility for Resource Management). The batch script may be given to sbatch through a file name on the command line, or if no file name is specified, sbatch will read in a script from standard input. The batch script may contain options preceded with "#SBATCH" before any executable commands in the script.

231 questions
1
vote
1 answer

Capture an sbatch file's output

I want to be able to run a command inside the bash file and save its output in somefile.txt. I am running my script the following way: sbatch file.sh, and inside this file I have a terminal command
itsmrbeltre
  • 412
  • 7
  • 18
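A common answer to this kind of question is plain shell redirection inside the batch script; the file names below are placeholders, not taken from the question:

```shell
#!/bin/bash
#SBATCH --job-name=capture
#SBATCH --output=slurm-%j.out   # sbatch's own stdout/stderr log (%j = job ID)

# Redirect the output of one command into somefile.txt;
# anything not redirected still lands in slurm-%j.out.
hostname > somefile.txt
```

Note that without an explicit redirect, all command output goes to the file named by `--output` (or `slurm-<jobid>.out` by default).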
1
vote
1 answer

Why does Python not import my library even though it's present in the filesystem when I use sbatch with SLURM?

I was trying to use a simple script that imported the library namespaces when using SLURM and sbatch; however, I am not able to do it because it doesn't find the library (even though pip list shows it's installed in my environment). The script I am…
Charlie Parker
  • 5,884
  • 57
  • 198
  • 323
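The usual cause of this symptom is that the batch job runs with a different Python than the interactive shell. A minimal sketch of activating the environment inside the script itself (the conda path and environment name are assumptions, not from the question):

```shell
#!/bin/bash
#SBATCH --job-name=pyjob

# Compute nodes do not inherit an interactively activated environment,
# so activate it explicitly inside the batch script.
source ~/miniconda3/etc/profile.d/conda.sh   # assumed install location
conda activate myenv                         # hypothetical env name

which python        # sanity check: should point inside myenv
python myscript.py  # hypothetical script name
```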
1
vote
1 answer

/usr/bin/modulecmd: No such file or directory

I'm using sbatch to submit my job. Running mpirun --version gives: Intel(R) MPI Library for Linux* OS, Version 5.0 Build 20140507 Copyright (C) 2003-2014, Intel Corporation. All rights reserved. So I think I'm working with Intel…
dudu
  • 801
  • 1
  • 10
  • 32
1
vote
1 answer

Running Batch Job on Slurm Cluster

So I have spent a few hours now trying to figure this out and would appreciate any help. What I am trying to do is run a batch job with a Slurm array (--array=0-654). I would like each job step to run 8 threads. I have access to 11 nodes on the cluster, each…
patrick9382
  • 97
  • 1
  • 11
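For an array where each task runs multithreaded, the standard pattern is `--cpus-per-task` plus `OMP_NUM_THREADS`; the program name below is a placeholder:

```shell
#!/bin/bash
#SBATCH --array=0-654        # one array task per input index
#SBATCH --ntasks=1           # each array task is a single process...
#SBATCH --cpus-per-task=8    # ...with 8 CPUs for its threads

# Tell the threading runtime how many CPUs Slurm granted.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./my_program "$SLURM_ARRAY_TASK_ID"   # hypothetical program
```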
1
vote
0 answers

Program strongly scaling on 1 node, large increase in runtime using 2 nodes

The results show that as I increase the number of processors from 2 to 4 to 10, the runtime decreases each time, but when I get to 20 processors there is a large increase in runtime. Each node has two 8-core processors, so I want to limit each node…
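Capping how many ranks land on each node is usually done with `--ntasks-per-node`; a sketch under the question's 2-socket, 8-core-per-socket assumption (program name is a placeholder):

```shell
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=10   # cap MPI ranks per node (hardware: 2 x 8 cores)

# srun inherits the allocation: 2 nodes x 10 ranks = 20 ranks total.
srun ./my_mpi_program
```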
1
vote
1 answer

slurm seems to be launching more tasks than requested

I'm having trouble getting my head around the way jobs are launched by SLURM from an sbatch script. It seems like SLURM is ignoring the --ntasks argument and launching all the srun tasks in my batch file immediately. Here is an example, using a…
Tom Harrop
  • 678
  • 2
  • 7
  • 23
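By default, each `srun` inside a batch script starts a job step sized to the whole allocation, which can look like Slurm "ignoring" `--ntasks`. A sketch of running job steps concurrently within the allocation (task names are placeholders; `--exact` is the flag in recent Slurm releases, older versions used `--exclusive` on the step):

```shell
#!/bin/bash
#SBATCH --ntasks=2

# Each srun launches exactly one task; "&" backgrounds the step
# so both run concurrently, and "wait" blocks until both finish.
srun --exact -n1 ./task_a &   # --exact keeps steps from sharing CPUs
srun --exact -n1 ./task_b &
wait
```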
0
votes
1 answer

Script to send email not working when run on SLURM

I have a bash script that performs various weekly data collection tasks and generates a report, which is then echoed into an email to be sent. I have run the script manually in the Linux terminal and have confirmed I can receive emails from it. The…
Wing
  • 1
  • 1
0
votes
0 answers

asyncio.create_subprocess_shell does not work well when running sbatch

Content like this: class Tcontent: def __init__(self, op_file): self.op_file = op_file async def __aenter__(self): sbatch_cmd = f"sbatch -p test -q test -c 1 --mem 1000 -o {self.op_file} myscript.sh" proc =…
YANG ZHOU
  • 9
  • 2
0
votes
1 answer

Unable to use sbcast to copy over files to compute nodes from master

I have a cluster of 6 compute nodes and 1 master node for academic research purposes. I am trying to test my cluster and make sure that the nodes can complete an assortment of submitted sbatch jobs. I want to use the sbcast command to copy over a file…
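sbcast runs inside a job allocation and copies a file to node-local storage on every allocated node; a minimal sketch (file and program names are placeholders):

```shell
#!/bin/bash
#SBATCH --nodes=6

# Copy the input file to local /tmp on every node of this allocation,
# then have each task read its local copy.
sbcast my_input.dat /tmp/my_input.dat   # hypothetical input file
srun ./my_program /tmp/my_input.dat
```

Running sbcast outside a job (with no allocation) is a common source of failures, since it has no node list to broadcast to.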
0
votes
1 answer

slurm - independent tasks run slowly when all in one job

I'm calculating power spectra (Fourier transforms) of astronomical time series data. I have ~6,000 time series data sets, which naturally parallelizes. For my data I already calculated power spectra for all data sets up to a certain frequency. I now…
Henry
  • 1
  • 1
0
votes
1 answer

Using all cores of 2 nodes in an HPC

I am trying to run R code in an HPC environment. The HPC system has 8 nodes with 20 cores each. I wish to use 2 nodes, utilizing 40 cores in total. I am submitting the job through SLURM, and it is running the .R file, which has a parallel computing…
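One caveat behind this question: fork-based R parallelism (e.g. mclapply) cannot cross node boundaries, so requesting 2 nodes only helps if the R code uses a multi-node backend. A sketch of the resource request, with the script name and backend choice as assumptions:

```shell
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=20   # 40 workers total across both nodes

# mclapply alone would only see the 20 cores of one node; spanning
# both nodes needs an MPI-aware backend (e.g. Rmpi/doMPI) launched
# with one worker per Slurm task.
srun Rscript my_analysis.R   # hypothetical script
```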
0
votes
0 answers

sbatch with ntasks and array jobs is not working for me

I now have this script #!/bin/bash #SBATCH -t 12:00:00 #SBATCH -N 1 #SBATCH --tasks-per-node 81 #SBATCH -p partition #SBATCH -A user #SBATCH -a 0-3 module load gcc/9.3.0 module load intel/2021.2 module load impi/2021.2 module load cp2k/9.1 source…
loom
  • 1
  • 3
0
votes
1 answer

Allocate a set of array jobs on a predefined number of nodes

I am trying to execute a process using 6000 samples as input, so I am trying to use job arrays to do so. My main problem is that those jobs get allocated on every node my machine has available, and I would like to restrict the number of nodes…
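Two sbatch options commonly used to keep an array from spreading over the whole cluster are the `%N` throttle and an explicit node list; the node names below are placeholders:

```shell
#!/bin/bash
#SBATCH --array=0-5999%100        # at most 100 array tasks running at once
#SBATCH --nodelist=node[01-04]    # hypothetical names: confine tasks to these nodes
#SBATCH --ntasks=1

./process_sample "$SLURM_ARRAY_TASK_ID"   # hypothetical program
```

`--exclude=<nodes>` is the complementary option when it is easier to name the nodes to avoid than the nodes to use.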
0
votes
0 answers

How to allocate multiple node types in SLURM job

I want to allocate 'x' nodes of node type A and 'y' nodes of node type B for a single SLURM job. How do we do it using the salloc command? For a single task we do this: salloc -N 1 -C --ntasks-per-node=1 --exclusive srun <..> but for…
Rishank
  • 1
  • 1
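Slurm's heterogeneous job support (available since Slurm 17.11) addresses exactly this: components separated by ":" each get their own resource request. A sketch, where the feature names typeA/typeB are assumptions standing in for whatever constraints tag the two node types:

```shell
# One heterogeneous job with two components:
#   component 0: 2 nodes carrying feature "typeA"
#   component 1: 1 node  carrying feature "typeB"
salloc -N 2 -C typeA --ntasks-per-node=1 : -N 1 -C typeB --ntasks-per-node=1
```

In a batch script the same split is written with a `#SBATCH hetjob` separator line between the two blocks of `#SBATCH` directives.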
0
votes
0 answers

Question regarding pending jobs (reason: resource) on slurm

I recently started working with Slurm and came up with a question regarding submitting a job. I have submitted an sbatch file via the sbatch myfile.sbatch command, but the job doesn't start running; it keeps showing "pending, reason: Resources" even…
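"Resources" as a pending reason means the requested CPUs, memory, or nodes are simply not free yet (as opposed to "Priority", which means other jobs are queued ahead). A few standard commands for diagnosing it; the job ID 12345 is a placeholder:

```shell
squeue -j 12345 --start    # Slurm's estimated start time for the job
scontrol show job 12345    # full resource request, to sanity-check for over-asking
sinfo                      # current node states (idle/alloc/drain) per partition
```

If `scontrol` shows a request larger than any single node can satisfy (e.g. more memory or CPUs than a node has), the job will pend forever with this reason.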