
Hi, I have two GPU cards: '0' is an NVIDIA GeForce and '1' is an AMD Radeon. I am running a deep learning model with PyTorch, and I installed PyTorch with CUDA 11.7 for the NVIDIA card and PyTorch with ROCm 5.2 for the AMD card. However, all the calculations still happen on the NVIDIA card.

Could you help me split my calculations across both cards, or switch to the AMD card when the NVIDIA card is about to be full?

What I tried is to make both cards visible with:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
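
For reference, this is roughly the behaviour I'm hoping for (a minimal sketch, assuming PyTorch can actually see both cards as CUDA devices; pick_device is just a name I made up, not an existing API):

import torch

def pick_device():
    # Hypothetical helper: choose the visible GPU with the most free memory,
    # so new work shifts to the other card once the first one fills up.
    if not torch.cuda.is_available():
        return torch.device("cpu")
    free_per_card = [torch.cuda.mem_get_info(i)[0] for i in range(torch.cuda.device_count())]
    return torch.device(f"cuda:{free_per_card.index(max(free_per_card))}")

device = pick_device()
x = torch.randn(1024, 1024, device=device)  # example workload lands on the chosen card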
