I know I can access the current GPU using torch.cuda.current_device(), but how can I get a list of all the currently available GPUs?

4 Answers
You can list all the available GPUs by doing:
>>> import torch
>>> available_gpus = [torch.cuda.device(i) for i in range(torch.cuda.device_count())]
>>> available_gpus
[<torch.cuda.device object at 0x7f2585882b50>]
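If what you actually want is readable names rather than device objects, a minimal sketch (assuming a CUDA-enabled PyTorch build; `torch.cuda.get_device_name` is part of the standard API, and the variable name here is just for illustration):

import torch
# One human-readable name per visible CUDA device
gpu_names = [torch.cuda.get_device_name(i) for i in range(torch.cuda.device_count())]
print(gpu_names)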

- This is not a correct answer. `torch.cuda.device(i)` returns a context manager that causes future commands to use that device. Putting them all in a list like this is pointless. All you really need is `torch.cuda.device_count()`; your CUDA devices are `cuda:0`, `cuda:1`, etc., up to `device_count() - 1`. – Joel Croteau Aug 07 '21 at 02:58
- But this gives you no information about the GPU, which is not really *listing* the GPUs. I want to see the name, model, etc., so I know I'm using the right one. – Jack M May 26 '22 at 16:07
- What is wrong with `import torch; num_of_gpus = torch.cuda.device_count(); print(num_of_gpus)`? – Charlie Parker Aug 18 '22 at 20:20
- @CharlieParker: The device count does not give you the specific name or type of the GPU (e.g. NVIDIA GeForce RTX 2070), but `torch.cuda.get_device_properties(i).name` does. – Thornhale Jan 16 '23 at 23:08
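Following the comments above, a minimal sketch (only `torch.cuda.device_count()` is assumed) that builds the list of usable device strings instead of context managers:

import torch
# Device strings such as 'cuda:0', 'cuda:1', ... usable with torch.device(...) or .to(...)
available_gpus = [f'cuda:{i}' for i in range(torch.cuda.device_count())]
print(available_gpus)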
I know this answer is kind of late. I thought the author of the question asked what devices are actually available to PyTorch, not:
- how many are available (obtainable with `device_count()`), or
- the device manager handle (obtainable with `torch.cuda.device(i)`),

which is what some of the other answers give.
If you want to know what the actual GPU name is (e.g. NVIDIA GeForce RTX 2070), try the following instead:
import torch
for i in range(torch.cuda.device_count()):
    print(torch.cuda.get_device_properties(i).name)
Note the use of the `get_device_properties(i)` function. This returns an object that looks like this:
_CudaDeviceProperties(name='NVIDIA GeForce RTX 2070', major=8, minor=6, total_memory=12044MB, multi_processor_count=28)
This object contains a property called `name`. You may optionally drill down directly to the `name` property to get the human-readable name associated with the GPU in question.
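As a hedged extension of the same idea, a short sketch that prints name, total memory, and compute capability for every visible GPU; it relies only on the `name`, `total_memory`, `major`, and `minor` attributes shown in the output above (`total_memory` is reported in bytes):

import torch
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    # total_memory is in bytes; convert to MiB for readability
    print(f"cuda:{i} | {props.name} | "
          f"{props.total_memory / 1024**2:.0f} MiB | "
          f"compute capability {props.major}.{props.minor}")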
Check how many GPUs are available with PyTorch
import torch
num_of_gpus = torch.cuda.device_count()
print(num_of_gpus)
If you want to use the first GPU:
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
Replace the 0 in the command above with another number if you want to use a different GPU.
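To actually put work on the selected device, a minimal sketch (the tensor shape here is just a placeholder):

import torch

device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
x = torch.randn(2, 3, device=device)  # allocate the tensor directly on the chosen device
print(x.device)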

- This answer shows how many GPU devices one has, but does not explicitly list all the available GPUs (presumably with names and types). To do that one would use `torch.cuda.get_device_properties()`. Optionally, one may just drill down to the `name` property. – Thornhale Jan 16 '23 at 23:10
Extending the previous replies with device properties:
$ python3 -c "import torch; print([(i, torch.cuda.get_device_properties(i)) for i in range(torch.cuda.device_count())])"
[(0, _CudaDeviceProperties(name='NVIDIA GeForce RTX 3060', major=8, minor=6, total_memory=12044MB, multi_processor_count=28))]
