
I recently did an apt-get upgrade on my Ubuntu 16.04 system, and one of the things it did was pull in a revised nvidia-375 package. I notice now that if I run a process as

$ CUDA_VISIBLE_DEVICES=0 ./myprocess

it actually shows up on nvidia-smi as running on GPU 1, and similarly if I run

$ CUDA_VISIBLE_DEVICES=1 ./myprocess

nvidia-smi shows the process as running on GPU 0. This is the opposite of the behavior I was getting before the update, and seems to be the opposite of what's described in a common reference on CUDA_VISIBLE_DEVICES.

Is there a "fix" for this? It's not a major inconvenience, but it would be nice to have some consistency.


PS: I'm not aware of any other NVIDIA-related "issues" on my system since I did this upgrade, just the GPU ID switch.
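
For anyone wanting to verify which physical GPU a given CUDA device index actually maps to, here is a minimal sketch (the file name check_order.cu and the program itself are mine, not part of the original question) that prints each visible device's PCI bus ID, which can be matched against the Bus-Id column in nvidia-smi's output:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess || n == 0) {
        fprintf(stderr, "no CUDA devices visible\n");
        return 1;
    }
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        // pciBusID and pciDeviceID correspond to the bus:device portion
        // of the Bus-Id that nvidia-smi reports (e.g. 01:00.0)
        printf("CUDA device %d: %s  PCI %02x:%02x.0\n",
               i, p.name, p.pciBusID, p.pciDeviceID);
    }
    return 0;
}

$ nvcc check_order.cu -o check_order
$ CUDA_VISIBLE_DEVICES=0 ./check_order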

  • You may be able to adjust it with [an environment variable](http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars), specifically `CUDA_DEVICE_ORDER=PCI_BUS_ID`, which should force CUDA to enumerate devices in the same order as `nvidia-smi` – Robert Crovella Apr 08 '17 at 22:22
  • That works! Thanks. If you make your comment an "answer," I'll give you my vote! – sh37211 Apr 08 '17 at 23:34
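
For the record, combining the two variables as the comment suggests would look like this (./myprocess is the question's placeholder for the actual workload):

$ CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_VISIBLE_DEVICES=0 ./myprocess

The default ordering is FASTEST_FIRST, which is not guaranteed to be stable across driver updates and may explain why the mapping flipped after the upgrade; PCI_BUS_ID forces the same ordering nvidia-smi uses, so device 0 should again be the GPU that nvidia-smi lists as GPU 0.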

0 Answers