
I am trying to run a PyCUDA program across two GPUs. I have read a great post by talonmies explaining how to do it with the threading library; the post also mentioned that this is possible with mpi4py.

When I run mpi4py with PyCUDA, the program fails with:

self.ctx = driver.Device(gpuid).make_context()
pycuda._driver.LogicError: cuDeviceGet failed: not initialized

Perhaps this is due to my attempt to initialize the two GPU devices simultaneously. Does anyone have a very short example of how to get 2 GPUs working with mpi4py?

user847078
  • What does "initialize two of the gpu devices simultaneously" mean? With mpi4py the multi-GPU model is incredibly simple: have each rank in the communicator choose a unique GPU and establish a context on that card. You can either have one rank determine the GPUs for each member of the communicator and broadcast them, or just derive the GPU ID from the process rank. Can you post the code that is failing? – talonmies Jul 18 '11 at 07:58
  • +1 - please please does anyone have any advice on this!? – jtlz2 Jun 27 '14 at 11:11
  • http://maldun.lima-city.de/introduction_to_python/PyCUDA.html#mpi-and-pycuda-ref - except the code doesn't succeed... – jtlz2 Jun 27 '14 at 11:16

1 Answer


For anyone who chances upon this question, here is a working mpi4py+pycuda example.

lebedov