Questions tagged [qsub]

Qsub is a job submission command for high-performance computing (HPC) jobs. It is used by several resource managers and schedulers, including TORQUE, PBS Pro, OpenPBS, and Sun Grid Engine (SGE). It takes many options and is the way a job (work request) gets queued for consideration in the cluster.

Questions with the "qsub" tag should clearly indicate which resource manager or scheduler is being used.

385 questions
3
votes
1 answer

Starting a Jupyter notebook on a node of my cluster (High Performance Computing, or HPC, facility)

I want to run a Jupyter notebook on a compute node of our cluster, NOT on the login node. I could run the notebook remotely on the login node, but that would needlessly slow down the cluster for everyone. Please guide me how I can start the jupyter…
deltasata
  • 377
  • 1
  • 4
  • 21
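
A minimal submission-script sketch for this pattern, assuming PBS-style directives, a free port 8888, and jupyter on the compute node's PATH (the resource requests are placeholders):

    #!/bin/bash
    #PBS -N jupyter
    #PBS -l nodes=1:ppn=4
    #PBS -l walltime=08:00:00
    cd $PBS_O_WORKDIR
    # record which compute node we landed on, so we know where to tunnel
    echo "Notebook running on $(hostname)"
    # bind to the node's hostname instead of localhost; don't try to open a browser
    jupyter notebook --no-browser --ip="$(hostname)" --port=8888

From a workstation, a tunnel such as ssh -L 8888:<node-hostname>:8888 user@login-node then makes the notebook reachable at localhost:8888.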
3
votes
0 answers

How much memory does Java need to start?

I am having some issues running Java in an environment with memory controls. My use case is Sun Grid Engine (SGE), but I can reproduce the problem with ulimit. When I try to run java with a heap limit (-Xmx), I find that I still need to allow a much…
Evan Benn
  • 1,571
  • 2
  • 14
  • 20
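
The ulimit reproduction described here can be sketched as below (the numbers are illustrative). The key point is that the JVM reserves virtual address space well beyond the -Xmx heap (metaspace, code cache, thread stacks), which is what trips virtual-memory limits such as SGE's h_vmem:

    # cap virtual memory at 1 GiB (ulimit -v takes KiB), then request only a 256 MiB heap;
    # the JVM may still fail to start or need most of the 1 GiB for its other reservations
    bash -c 'ulimit -v 1048576; java -Xmx256m -version'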
3
votes
1 answer

Problem using a Conda environment in Snakemake on an SGE cluster

Related: SnakeMake rule with Python script, conda and cluster. I have been trying to set up my Snakemake pipelines to run on SGE clusters (qsub). Using simple commands or tools that are installed directly on the compute nodes, there is no…
user44697
  • 313
  • 4
  • 11
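
A sketch of a cluster invocation for this setup, using Snakemake's --cluster interface (the parallel environment name smp is an assumption; sites name their PEs differently):

    # --use-conda builds each rule's conda environment on first use;
    # -V exports the submit-time environment, -cwd runs jobs in the working directory
    snakemake --use-conda --jobs 20 \
        --cluster "qsub -V -cwd -pe smp {threads}"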
3
votes
1 answer

Printing training progress with Keras using QSUB and a bash file

I'm able to run a Python script that trains a model using Keras/TensorFlow with the following bash script:

    #!/bin/bash
    #PBS -N Tarea_UNET
    #PBS -l nodes=1:ppn=4:gpus=1
    cd $PBS_O_WORKDIR
    source $ANACONDA3/activate inictel_uni
    python U-NET.py

Inside…
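
A common culprit in this situation is stdout buffering: Python buffers its output when not attached to a terminal, so progress shows up in the .o file only in large, late chunks. A hedged tweak to the last line of the script above:

    # -u (equivalently, PYTHONUNBUFFERED=1) flushes stdout line by line,
    # so training progress reaches the PBS output file as it happens
    python -u U-NET.py

Keras's interactive progress bar (verbose=1) also renders poorly in a redirected log; verbose=2, one line per epoch, usually reads much better there.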
3
votes
1 answer

SLURM: Access walltime limit from script

Is it possible to access the walltime limit from within a SLURM script? For PBS/Torque, this question has been answered here. Is there a similar environment variable for SLURM?
Julian Helfferich
  • 1,200
  • 2
  • 14
  • 29
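
SLURM does not export the limit as a single environment variable the way Torque does, but a job can query its own time limit from the controller; a sketch:

    # %l prints the job's TimeLimit (e.g. "1-00:00:00"); -h suppresses the header
    TIMELIMIT=$(squeue -h -j "$SLURM_JOB_ID" -o %l)
    echo "Walltime limit: $TIMELIMIT"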
3
votes
3 answers

qsub is executing my bash script in csh despite shebang

I want to submit a bash script to my university's Sun Grid Engine computing cluster to run an executable in a loop. When I log in to the server, I'm in bash:

    $ echo $SHELL
    /bin/bash

And I include a bash shebang at the top of the script that I pass to…
ApproachingDarknessFish
  • 14,133
  • 7
  • 40
  • 79
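
By default SGE ignores the shebang (shell_start_mode posix_compliant) and runs the script under the queue's default shell. The usual fix is to name the shell explicitly:

    #!/bin/bash
    #$ -S /bin/bash
    # the -S directive tells SGE which shell to use; the shebang alone is not honored
    echo "$SHELL"

The same can be done per submission with qsub -S /bin/bash script.sh, or cluster-wide by an administrator via shell_start_mode unix_behavior.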
3
votes
2 answers

Setting PBS/Torque/qsub parameters in script via command line arguments

I want to be able to easily change how many nodes, ppn, etc. I submit to qsub via a script. That is, I want to run something like this:

    qsub script.sh --name=test_job --nodes=2 --ppn=2 --arg1=2

With a script like the following:

    #!/bin/bash
    #PBS -N…
AKW
  • 857
  • 8
  • 14
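
qsub already accepts every #PBS directive as a command-line option, and command-line options override the in-script ones; values the script itself needs can be passed as environment variables with -v. A sketch:

    # -N and -l override any matching #PBS lines inside script.sh;
    # -v makes ARG1 visible inside the script as $ARG1
    qsub -N test_job -l nodes=2:ppn=2 -v ARG1=2 script.sh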
3
votes
2 answers

(shell script - qsub) wait for submitted job to complete before next command

My shell script involves a qsub job submission followed by copying the file generated by that job to some other location. How does one do that? Here is what my shell script looks like:

    ...
    qsub synplify.csh
    cp ./rev_1/netlist.vqm ~/sample
    ...

Here,…
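
On SGE, qsub -sync y blocks until the job finishes. Torque has no -sync, so a polling sketch (the 30-second interval is arbitrary):

    JOBID=$(qsub synplify.csh)
    # qstat exits non-zero once the job has left the queue; note that on Torque
    # a finished job can linger in state C for a while before being purged
    while qstat "$JOBID" >/dev/null 2>&1; do
        sleep 30
    done
    cp ./rev_1/netlist.vqm ~/sample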
3
votes
2 answers

When using qsub to submit jobs, how can I include my locally installed python packages?

I have an account on a supercomputing cluster where I've installed some packages using e.g. "pip install --user keras". When using qsub to submit jobs to the queue, I try to make sure the system can see my local packages by setting "export…
user1634426
  • 563
  • 2
  • 5
  • 12
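
A hedged sketch (the job and script names are placeholders): rather than hard-coding the user-site path, ask the node's python where --user packages live. This only helps if the compute node runs the same Python version that pip install --user targeted:

    #!/bin/bash
    #PBS -N myjob
    cd $PBS_O_WORKDIR
    # prepend the user-site directory (e.g. ~/.local/lib/pythonX.Y/site-packages)
    export PYTHONPATH="$(python -m site --user-site):$PYTHONPATH"
    python myscript.py

Alternatively, qsub -V forwards the entire submission-shell environment, including any PYTHONPATH already set there.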
3
votes
1 answer

Running a job on multiple nodes of a GridEngine cluster

I have access to a 128-core cluster on which I would like to run a parallelised job. The cluster uses Sun GridEngine and my program is written to run using Parallel Python, numpy, scipy on Python 2.5.8. Running the job on a single node (4-cores)…
Chinmay Kanchi
  • 62,729
  • 22
  • 87
  • 114
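
Crossing node boundaries on GridEngine means requesting a parallel environment, after which $PE_HOSTFILE lists the granted hosts. A sketch for Parallel Python (the PE name mpi and port 35000 are assumptions, and passwordless ssh between nodes is presumed):

    #!/bin/bash
    #$ -S /bin/bash
    #$ -pe mpi 16
    # $PE_HOSTFILE has one line per host: hostname slots queue processor-range
    while read host slots _; do
        # start a ppserver with one worker per granted slot on each host
        ssh "$host" "ppserver.py -p 35000 -w $slots" &
    done < "$PE_HOSTFILE"
    python my_parallel_job.py   # hypothetical driver that connects to the ppservers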
3
votes
0 answers

R program got stuck in Grid Engine: quser shows it is running but there are no results

I am using a Grid Engine cluster. I first wrote a shell script called test.sh as follows:

    #!/usr/bin/bash
    export R="/share/apps/R/3.1.1/intel/2013.0.028"
    export INIT_DIR="$PWD"
    CHR=4
    let K=`wc -l chr"$CHR"_1.bim | awk '{print $1}'`
    let…
Mike Brown
  • 331
  • 2
  • 12
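
When a Grid Engine job reports as running but produces nothing, the standard first checks look like this (the job id is illustrative):

    qstat -j 123456    # full resource and scheduling detail for the job
    qacct -j 123456    # accounting record once the job finishes or dies
    # then ssh to the execution host and check with top/ps whether the
    # R process is actually consuming CPU or sitting blocked on I/O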
3
votes
1 answer

Understanding the -t option in qsub

The documentation is a bit unclear on exactly what the -t option does on a job submission using qsub: http://docs.adaptivecomputing.com/torque/4-0-2/Content/topics/commands/qsub.htm

From the documentation:

    -t  Specifies the task ids of a job…
David Parks
  • 30,789
  • 47
  • 185
  • 328
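
In short, -t turns a single submission into an array job: one task per id in the range, each running the same script with its own task id. A Torque sketch:

    #!/bin/bash
    #PBS -N array_demo
    #PBS -t 1-10
    # Torque exposes the task id as PBS_ARRAYID (here 1 through 10);
    # it is typically used to select a per-task input chunk
    echo "Processing chunk $PBS_ARRAYID"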
3
votes
1 answer

SGE submitted job state doesn't change from "qw"

I'm using Sun Grid Engine on Ubuntu 14.04 to queue my jobs to run on a multicore CPU. I've installed and set up SGE on my system. I created a "hello_world" directory containing two shell scripts, "hello_world.sh" and "hello_world_qsub.sh",…
mhr
  • 144
  • 3
  • 12
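
A job pinned in qw usually means the scheduler cannot, or will not yet, place it; SGE can be asked why directly (the job id is illustrative):

    qstat -j 42      # the "scheduling info:" lines at the bottom explain the wait
    qalter -w v 42   # dry-run validation: could this job ever be scheduled?
    qhost            # do any hosts actually offer the requested slots/resources?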
3
votes
2 answers

How to see the output of a job submitted through qsub in my terminal?

I am submitting this simple job to SGE through qsub. How can I see the output of the job, which is a simple echo, in my terminal? I mean I want it directly on screen, not diverted to a logfile or anything. So here is the job, stored in…
user3708408
  • 59
  • 2
  • 5
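
SGE always spools batch output to files, so the direct route is an interactive submission with qrsh, which keeps stdout attached to the terminal (assuming the script lives on a filesystem visible to the node):

    # runs through the scheduler but streams output straight to your screen
    qrsh -cwd ./hello_world.sh

Alternatively, qsub -sync y blocks until the job ends, after which the .o file can be tailed.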
3
votes
1 answer

Torque PBS: Specifying stdout file name to be the job id number

By default, output from a job submitted to a Torque queue is saved to a file named like job_name.o658392. What I want, using that example, is to name the output file 658392.job_name.log instead. I know I can specify the name of the…
jonaslb
  • 179
  • 9
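
The job id is not known at submission time, so a #PBS -o line cannot embed it; one workable trick is to discard the default stream and redirect inside the script once $PBS_JOBID exists:

    #!/bin/bash
    #PBS -N job_name
    #PBS -j oe
    #PBS -o /dev/null
    # -j oe merges stderr into stdout; -o /dev/null discards the default spool.
    # PBS_JOBID looks like "658392.server"; keep only the numeric part.
    exec > "$PBS_O_WORKDIR/${PBS_JOBID%%.*}.${PBS_JOBNAME}.log" 2>&1
    echo "all further output now lands in the custom log"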