
When a computer has multiple CUDA-capable GPUs, each GPU is assigned a device ID. By default, CUDA kernels execute on device ID 0. You can use cudaSetDevice(int device) to select a different device.
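As a minimal sketch of the device-selection API (no particular hardware assumed), this enumerates all CUDA-capable GPUs, prints each one's properties, and then selects a device other than the default:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);              // number of CUDA-capable GPUs
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("device %d: %s (compute %d.%d)\n",
               d, prop.name, prop.major, prop.minor);
    }
    if (count > 1)
        cudaSetDevice(1);  // subsequent kernels/allocations target device 1
    return 0;
}
```

The call to `cudaSetDevice` affects only the calling host thread; each thread can target a different device.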

Let's say I have two GPUs in my machine: a GTX 480 and a GTX 670. How does CUDA decide which GPU is device ID 0 and which GPU is device ID 1?


Ideas for how CUDA might assign device IDs (just brainstorming):

  • descending order of compute capability
  • PCI slot number
  • date/time when the device was added to the system (the most recently added device gets the highest ID number)

Motivation: I'm working on some HPC algorithms, and I'm benchmarking and autotuning them for several GPUs. My processor has enough PCIe lanes to drive cudaMemcpy transfers to 3 GPUs at full bandwidth. So, instead of constantly swapping GPUs in and out of my machine, I'm planning to just keep 3 GPUs in my computer. I'd like to be able to predict what will happen when I add or replace some GPUs in the computer.

solvingPuzzles
4 Answers


Set the environment variable CUDA_DEVICE_ORDER as:

export CUDA_DEVICE_ORDER=PCI_BUS_ID

Then the GPU IDs will be ordered by PCI bus ID.
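One way to confirm the ordering is to print each CUDA device's PCI bus ID and compare against `nvidia-smi`. A small sketch, to be run with `CUDA_DEVICE_ORDER=PCI_BUS_ID` set in the environment:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        char busId[64];
        // Writes a string like "0000:02:00.0" for device d
        cudaDeviceGetPCIBusId(busId, sizeof(busId), d);
        printf("device %d -> PCI bus id %s\n", d, busId);
    }
    return 0;
}
```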

Liang Xiao
    With this set, the CUDA device IDs are consistent with `nvidia-smi`'s output! IMO this is a must-set for machine learning on a multi-GPU machine. – Falcon Jul 27 '17 at 00:51

CUDA picks the fastest device as device 0. So when you swap GPUs in and out the ordering might change completely. It might be better to pick GPUs based on their PCI bus id using:

cudaError_t cudaDeviceGetByPCIBusId ( int* device, char* pciBusId )
   Returns a handle to a compute device.

cudaError_t cudaDeviceGetPCIBusId ( char* pciBusId, int  len, int  device )
   Returns a PCI Bus Id string for the device.

or the CUDA Driver API equivalents cuDeviceGetByPCIBusId / cuDeviceGetPCIBusId.
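A sketch of selecting a GPU by its PCI bus ID rather than by enumeration order (the bus ID string here is a placeholder; substitute the one reported by `lspci` or `nvidia-smi` for your machine):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Pick the GPU in a specific PCI slot, regardless of how
    // the runtime happened to enumerate the devices.
    int dev = -1;
    cudaError_t err = cudaDeviceGetByPCIBusId(&dev, "0000:02:00.0");
    if (err != cudaSuccess) {
        fprintf(stderr, "lookup failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    cudaSetDevice(dev);  // all further work targets that physical card
    return 0;
}
```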

But IMO the most reliable way to know which device is which is to use NVML or nvidia-smi to read each device's unique identifier (UUID) via nvmlDeviceGetUUID, and then match it to the CUDA device by PCI bus ID using nvmlDeviceGetPciInfo.
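The NVML-based matching described above might look roughly like this sketch (link with `-lnvidia-ml`; error checking omitted for brevity):

```cuda
#include <cstdio>
#include <cuda_runtime.h>
#include <nvml.h>  // NVIDIA Management Library; link with -lnvidia-ml

int main() {
    nvmlInit();
    unsigned int count = 0;
    nvmlDeviceGetCount(&count);
    for (unsigned int i = 0; i < count; ++i) {
        nvmlDevice_t dev;
        nvmlDeviceGetHandleByIndex(i, &dev);

        char uuid[96];
        nvmlDeviceGetUUID(dev, uuid, sizeof(uuid));  // stable per physical board

        nvmlPciInfo_t pci;
        nvmlDeviceGetPciInfo(dev, &pci);

        // Map the PCI bus id reported by NVML to a CUDA runtime device id
        int cudaDev = -1;
        cudaDeviceGetByPCIBusId(&cudaDev, pci.busId);
        printf("%s -> CUDA device %d\n", uuid, cudaDev);
    }
    nvmlShutdown();
    return 0;
}
```

The UUID is tied to the physical board, so this mapping stays valid even when cards are moved between slots.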

Przemyslaw Zych
    By "fastest" do you mean in terms of clock speed? – solvingPuzzles Dec 09 '12 at 10:06
    Some heuristics are used to estimate the theoretical speed of the GPU. They take into account e.g. chip architecture, clock speed, and driver model (on Windows, TCC is preferred). – Przemyslaw Zych Dec 09 '12 at 16:08
  • At the moment, I have 3 CUDA-capable GPUs in my machine: a GTX680, a GTX9800 (an ancient, slow GPU that I just use for graphics), and a C2050. Oddly, the GTX9800 gets a lower number than the C2050... strange. – solvingPuzzles Dec 26 '12 at 05:53
    Only the GPU with index 0 is guaranteed to be the fastest; the rest of the indexes are not sorted by speed. Does the GTX 9800 have index 0? If not, then everything is working as expected. – Przemyslaw Zych Dec 26 '12 at 07:43
    Nope, the GTX9800 doesn't have index 0. It makes more sense now. – solvingPuzzles Dec 26 '12 at 07:57
  • In CUDA 8, there is an [environment variable](http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars) which allows you to modify the enumeration order of the CUDA runtime API. – Robert Crovella Apr 01 '17 at 00:24

The CUDA Support/Choosing a GPU page suggests that

when running a CUDA program on a machine with multiple GPUs, by default CUDA kernels will execute on whichever GPU is installed in the primary graphics card slot.

Also, the discussion at No GPU selected, code working properly, how's this possible? suggests that CUDA does not, in general, map the "best" card to device 0.

EDIT

Today I installed a PC with a Tesla C2050 card for computation and an 8084 GS card for visualization, switching their positions between the first two PCI-E slots. Using deviceQuery, I noticed that GPU 0 is always the one in the first PCI slot and GPU 1 always the one in the second PCI slot. I do not know whether this holds in general, but it shows that on my system the GPUs are numbered not according to their "power" but according to their positions.

Vitality
    I agree. I've had cases where a machine has a modern GTX6xx Kepler and an ancient G80, and device 0 is the G80. The opposite has happened to me too. The "order of PCIe slots" explanation sounds reasonable. I haven't paid much attention to the PCIe slot order that I used, other than trying to reserve PCIe_3 slots for PCIe_3-compatible GPUs. – solvingPuzzles Sep 23 '13 at 02:40

The best solution I have found (tested with tensorflow==2.3.0) is to add the following before anything that may import tensorflow:

import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="0,3"  # specify which GPU(s) to be used

This way, the order in which TensorFlow enumerates the GPUs will match that reported by tools such as nvidia-smi or nvtop.

Thomas Tiotto
  • How does this in any way explain what order CUDA enumerates devices in, which is the question? – talonmies Sep 22 '20 at 12:28
  • Because the OP asked for "I'd like to be able to predict what will happen when I add or replace some GPUs in the computer" and my answer accomplishes just that. – Thomas Tiotto Mar 11 '21 at 10:15