
By default, PBS spreads my serial jobs across all the nodes in a queue before it starts using additional CPUs on nodes that already have jobs running.

Can I force PBS to keep submitting my jobs to one node until it exhausts all the CPUs of that node (say 12 CPUs, given that the combined memory requirement of 12 serial jobs is less than the memory of each node) before it submits the 13th job to the next node?

I want to do this so that later, when I submit a job with a higher memory requirement, it doesn't sit queued just because every node already has some small jobs running on it.

Ideally I would have separate queues for this purpose, but I want my queues to be dynamic: I may occasionally need to run more large-memory jobs, which I can't do if the small-memory queue, though not fully utilized, has jobs scattered across all nodes.
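For concreteness, the submissions in question might look like the following (the script names and memory figure are hypothetical; the resource-list syntax is standard TORQUE/PBS):

```shell
# 12 serial jobs, one CPU each — ideally these would pack onto a
# single 12-core node instead of being spread across the cluster:
qsub -l nodes=1:ppn=1 serial_job.sh

# Later, a large-memory job that needs a whole node to itself:
qsub -l nodes=1:ppn=12,mem=40gb bigmem_job.sh
```

Whether the serial jobs actually pack onto one node is decided by the scheduler's node-allocation policy, not by the `qsub` request itself.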

PyariBilli
  • What scheduler are you using? – dbeer Oct 16 '15 at 16:39
  • Node allocation depends on the scheduler used. If you are using OpenPBS in combination with Maui, then the allocation policy can be set as described [here](http://docs.adaptivecomputing.com/maui/5.2nodeallocation.php). `FIRSTAVAILABLE` or `CONTIGUOUS` should do the trick. Note that this affects the scheduling policy of the cluster as a whole, not only a specific subset of jobs. – Hristo Iliev Oct 21 '15 at 07:44
  • Another trick is to make sure that you don't have `NODEACCESSPOLICY SINGLEJOB` set in your configuration (for Maui, if you're using Maui). – dbeer Nov 20 '15 at 21:51
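Putting the two comments above together, a minimal sketch of the relevant Maui configuration (typically in `maui.cfg`) might look like this, assuming Maui is the scheduler in use; the exact file location and surrounding settings depend on the installation:

```
# maui.cfg — hypothetical fragment

# Fill a node's CPUs before allocating jobs to the next node:
NODEALLOCATIONPOLICY  FIRSTAVAILABLE   # or CONTIGUOUS

# Allow a node to run multiple jobs at once; SINGLEJOB here
# would defeat the packing by dedicating each node to one job:
NODEACCESSPOLICY      SHARED
```

As noted in the comments, `NODEALLOCATIONPOLICY` is cluster-wide, so it changes placement for all jobs, not just the serial ones.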

0 Answers