
I am replicating one of the basic MNIST tutorials using cleverhans. I have access to a multi-GPU machine, and the library seems to take full advantage of the multi-GPU architecture during training, which is great.

I would like, however, to be able to specify which GPU device to use for training.

I am aware of the `devices` argument of the `train` function; however, I have tried multiple values for that field, and it always allocates memory on all GPUs.


   train(
       sess,
       loss,
       x_train,
       y_train,
       devices=['/device:GPU:2', ],
       # also tried:
       # devices=["/GPU:0"],
       # devices=[2, ],
       # devices=['/gpu:2']
       # devices=['gpu:2']
       evaluate=evaluate,
       args=train_params,
       rng=rng
   )

Is there any way to use a single specific GPU (or a few), and have memory allocated only there?

Thanks

ClonedOne

1 Answer


One alternative way to specify which GPUs your Python process can access is to prepend `CUDA_VISIBLE_DEVICES=0,1,2` to your Python command. This makes only GPUs 0, 1, and 2 visible to the process, so memory is allocated only on those devices.

CUDA_VISIBLE_DEVICES=0,1,2 python script.py
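If you'd rather not change how the script is launched, the same environment variable can also be set from inside Python. A minimal sketch (the key constraint is that it must run before TensorFlow initializes its GPU context, i.e. before `import tensorflow` or at least before the first session is created):

```python
import os

# Must be set before TensorFlow touches the GPUs.
# Restrict this process to physical GPU 2; note that inside the
# process TensorFlow renumbers visible devices, so this GPU will
# appear as '/GPU:0'.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
```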

If there is a bug with the `devices` argument in our `train` method, feel free to open an issue or a PR fixing the bug.
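As a further alternative, since `train` takes a `sess`, you can restrict GPU visibility at the session level rather than per-process. This is a sketch assuming the TF1-style `ConfigProto` API, not something specific to cleverhans:

```python
import tensorflow as tf

# TF1-style session config (a sketch; adjust the device index to your setup):
# - visible_device_list limits this session to physical GPU 2
# - allow_growth avoids grabbing all of that GPU's memory up front
config = tf.ConfigProto(
    gpu_options=tf.GPUOptions(visible_device_list="2", allow_growth=True)
)
sess = tf.Session(config=config)
```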

  • Thanks for the reply; yes, I am using the env variable now. I will look into why the `devices` parameter didn't work and let you know if I find something – ClonedOne Jun 10 '19 at 13:25