
When I want to put the model on the GPU, I get the following error:

"RuntimeError: Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu"

However, everything involved has already been put on the GPU:

for m in model.parameters():
    print(m.device)  # prints cuda:0

if torch.cuda.is_available():
    model = model.cuda()
    test = test.cuda()  # test is the input

Windows 10 Server
PyTorch 1.2.0 with CUDA 9.2
CUDA 9.2
cuDNN 7.6.3 for CUDA 9.2

– kaiyu (edited by iacob)
    Possible duplicate of [Running LSTM with multiple GPUs gets "Input and hidden tensors are not at the same device"](https://stackoverflow.com/questions/54511769/running-lstm-with-multiple-gpus-gets-input-and-hidden-tensors-are-not-at-the-sa) – Shai Sep 25 '19 at 09:55
  • you need to send your inputs also to cuda. so for example your X_train_batch and label_batch etc... – basilisk Sep 25 '19 at 11:27

2 Answers


You need to move the model, the inputs, and the targets to CUDA:

if torch.cuda.is_available():
    model.cuda()
    inputs = inputs.cuda()
    target = target.cuda()
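Since the error message specifically mentions a hidden tensor, the layer involved is likely a recurrent one such as nn.LSTM, and any initial hidden/cell state you pass in explicitly must be moved to the same device as the input too. A minimal sketch (the sizes and the LSTM itself are assumptions for illustration, not the asker's actual model):

```python
import torch
import torch.nn as nn

# Pick the GPU if available, otherwise fall back to the CPU so the
# snippet also runs on CPU-only machines.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True).to(device)
inputs = torch.randn(4, 10, 8, device=device)  # (batch, seq_len, features)

# The initial hidden and cell states must live on the same device as the
# input; leaving them on the CPU raises exactly the error in the question.
h0 = torch.zeros(1, 4, 16, device=device)  # (num_layers, batch, hidden_size)
c0 = torch.zeros(1, 4, 16, device=device)

output, (hn, cn) = lstm(inputs, (h0, c0))
print(output.shape)  # torch.Size([4, 10, 16])
```

If you let PyTorch create the initial states for you (by calling `lstm(inputs)` without a state tuple), they are allocated on the input's device automatically, so this only bites when you build `h0`/`c0` yourself.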
– ESZ (edited by iacob)

This error occurs when PyTorch tries to compute an operation between a tensor stored on the CPU and one stored on the GPU. At a high level there are two kinds of tensors involved: your data and the model's parameters, and both can be moved to the same device like so:

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

data = data.to(device)
model = model.to(device)
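To see the pattern end to end, here is a self-contained sketch using a toy linear model (an assumption standing in for the asker's network) that checks every parameter ends up on the same device as the data:

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy stand-in for the question's model (an assumption for illustration).
model = nn.Linear(4, 2)
data = torch.randn(3, 4)

data = data.to(device)    # tensors: .to() returns a copy, so reassigning is required
model = model.to(device)  # modules: .to() moves in place; reassigning is harmless

# Every parameter now reports the same device as the data.
for p in model.parameters():
    assert p.device == data.device

print(model(data).shape)  # torch.Size([3, 2])
```

Note the asymmetry the comment below alludes to: for a `torch.Tensor`, `.to(device)` is out of place and you must keep the return value, while for an `nn.Module` it moves the parameters in place and the reassignment is optional.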
– iacob
  • Just a minor edit: there is no need to reassign model after putting it on the GPU; simply write `model.to(device)`. – Ender Apr 13 '22 at 21:17