There is an existing question on this topic, but its answer is not relevant to my case.
This code will transfer the model to multiple GPUs, but how do I transfer the data onto the GPUs?
import torch
import torch.nn as nn

if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0: a [30, ...] batch is split into [10, ...], [10, ...], [10, ...] on 3 GPUs
    model = nn.DataParallel(model, device_ids=[0, 1])
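For context, my setup follows the pattern from the official PyTorch DataParallel tutorial, where the snippet above is followed by moving the wrapped model to the first device (model here is the DataParallel-wrapped model from the snippet):

import torch

# continues the snippet above; DataParallel keeps the master copy
# of the parameters on device_ids[0]
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)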
My question is: what is the replacement for

X_batch, y_batch = X_batch.to(device), y_batch.to(device)

What should device be set to in the DataParallel case?
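For reference, that line sits in an ordinary training loop like the sketch below; loader, optimizer, criterion, and model are placeholders for my actual objects, and device is the value I am asking about:

# minimal sketch of the loop in question; all names besides device are placeholders
for X_batch, y_batch in loader:
    X_batch, y_batch = X_batch.to(device), y_batch.to(device)  # which device here?
    optimizer.zero_grad()
    loss = criterion(model(X_batch), y_batch)
    loss.backward()
    optimizer.step()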