
Recently our university bought a compute server with one multi-core Xeon CPU and four powerful GeForce video cards for a course called "High-Performance Computing with CUDA".

It runs Debian Squeeze. I'm trying to find a way to organize a task queue (or task spooler) so that students can submit their programs. Since there is only one CPU, I figured we need a queue: students' tasks are pushed into the queue and launched one by one.

Of course, there should also be a way to kill a task if it hangs.

After some googling I found two related things: Celery and Task Spooler.

Could you suggest something?
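For reference, a minimal sketch of how Task Spooler could cover this workflow (assuming Debian's task-spooler package, where the binary is typically named `tsp`; the program name is a placeholder):

```shell
# Enqueue a student's program; tsp prints a job ID
# and runs queued jobs one at a time by default.
tsp ./student_cuda_prog

# Inspect the queue and the state of each job.
tsp -l

# Kill a hung running job (sends SIGTERM to job 3);
# -r removes a job that is still waiting in the queue.
tsp -k 3
tsp -r 3
```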

Skyhawk
Kirill

2 Answers


You should consider Condor and one of the forks of Sun Grid Engine. Both Condor and SGE are heavily used in the academic HPC community for batch scheduling, and will allow you to scale gracefully should you acquire additional hardware.

justarobert
slurm is another option which is quite popular, although perhaps more targeted at efficient scheduling of large MPI jobs on large clusters. – janneb Apr 14 '11 at 07:16
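To illustrate the batch-scheduling workflow, here is a minimal SGE-style submit script. The script and job names are placeholders, and the `gpu=1` resource request is a site-specific assumption (GPU consumable resources must first be configured by the cluster admin):

```shell
#!/bin/sh
# Hypothetical SGE job script: submit with `qsub run_job.sh`.
#$ -N student_job      # job name shown in the queue
#$ -cwd                # run from the submission directory
#$ -l gpu=1            # request one GPU (resource name is site-defined)
./student_cuda_prog
```

A hung job can then be killed with `qdel <jobid>`, and `qstat` shows the state of the queue.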

How about setting up a task queue using Celery and PyCUDA? You would be able to distribute jobs across CPU cores as well as CUDA devices.

Skyhawk