I just bought a GTX 1080 Ti and I wanted to know if I can use both my old GTX 1070 and the GTX 1080 Ti in parallel for mini-batch training with either TensorFlow or PyTorch.
My main concern is:
Would the GTX 1070 bottleneck the GTX 1080 Ti, or would each card be used to its full potential?
I know that in an SLI configuration the total usable VRAM equals that of the card with the least memory (here the GTX 1070 with 8 GB of VRAM), but does the same thing happen during training with TensorFlow/PyTorch when no SLI is involved?
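For context, this is roughly what I had in mind on the PyTorch side (the model is just a placeholder; `nn.DataParallel` is what I assumed splits each mini-batch across the listed GPUs):

```python
import torch
import torch.nn as nn

# Placeholder model just to illustrate the setup
model = nn.Linear(128, 10)

if torch.cuda.device_count() >= 2:
    # device_ids=[0, 1] would be the 1080 Ti and the 1070;
    # each mini-batch is scattered across both cards, so each GPU
    # keeps its own copy of the model in its own VRAM (no pooling).
    model = nn.DataParallel(model, device_ids=[0, 1]).cuda()
```

My (possibly wrong) assumption is that each GPU holds a full replica of the model, so VRAM is not shared or limited to the smaller card the way it is under SLI.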