
I just bought a GTX 1080 Ti and I want to know whether I can use both my old GTX 1070 and the GTX 1080 Ti in parallel for mini-batching with either TensorFlow or PyTorch.

My main concern is:

Would the GTX 1070 bottleneck the GTX 1080 Ti, or will each card be used to its full capacity?

I know that in an SLI configuration the total usable VRAM equals that of the card with the least memory (here the GTX 1070, with 8 GB of VRAM). Does the same thing happen during training with TensorFlow/PyTorch when no SLI is involved?


1 Answer

  1. You can't run SLI for Deep Learning.
  2. You are bottlenecked by the PCIe interconnect. If both cards aren't running on x16 lanes, the one with fewer lanes will be slower.
  3. I'm not really sure what will happen if there are power issues.
  4. I can run a GTX 1060 and a GTX 970 in parallel for mini-batching (see the sketch below).
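For point 4, here is a minimal PyTorch sketch of what I mean by running two different cards in parallel for mini-batching. It assumes the faster card shows up as cuda:0 and the other as cuda:1 (the ordering depends on your system), and the tiny model and hyperparameters are only placeholders:

    import torch
    import torch.nn as nn

    # Placeholder model, stands in for whatever network you actually train.
    model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10))

    # DataParallel keeps a full copy of the weights on each listed GPU and
    # splits every mini-batch across them, so VRAM is NOT pooled as in SLI.
    # Assumption: device 0 is the GTX 1080 Ti and device 1 is the GTX 1070.
    device = torch.device("cuda:0")
    model = nn.DataParallel(model, device_ids=[0, 1]).to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # One training step on random data, just to show the mechanics:
    # a batch of 64 is split roughly 32/32 between the two cards.
    inputs = torch.randn(64, 1024, device=device)
    targets = torch.randint(0, 10, (64,), device=device)

    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()

Note that DataParallel splits each batch evenly and synchronises at every step, so a step only finishes once the slower card is done; in practice the GTX 1080 Ti will spend some time waiting on the GTX 1070.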