
I'm using TensorFlow (via the Keras API) in Python 3. I'm using the VGG19 pre-trained network to perform style transfer on an Nvidia RTX 2070.

The largest input image that I have is 4500x4500 pixels (I have removed the fully-connected layers from the VGG19 to get a fully-convolutional network that handles arbitrary image sizes). If it helps, my batch size is currently just 1 image at a time.
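
Roughly, my setup looks like this (the 512x512 placeholder image is just for illustration; my real images are much larger):

```python
import tensorflow as tf

# VGG19 without the fully-connected head: the remaining network is fully
# convolutional, so the spatial dimensions can be left as None.
vgg = tf.keras.applications.VGG19(include_top=False,
                                  weights='imagenet',
                                  input_shape=(None, None, 3))
vgg.trainable = False  # inference only, no training

# Batch of 1 image, as described above (placeholder size here; a real
# 4500x4500 input may not fit on a single GPU).
image = tf.random.uniform((1, 512, 512, 3)) * 255.0
features = vgg(tf.keras.applications.vgg19.preprocess_input(image))
```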

1.) Is there an option for parallelizing the evaluation of the model on the image input, given that I am not training the model but only passing data through the pre-trained network?

2.) Is there any increase in capacity for handling larger images when going from 1 GPU to 2 GPUs? Is there a way for memory to be shared across the GPUs?

I'm unsure whether larger images make my GPU compute-bound or memory-bound. I'm speculating that it's a compute issue, which is what started my search for discussions of parallel CNN evaluation. I've seen some papers on tiling methods that seem to allow for larger images.
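
To show what I mean by tiling, here's a rough sketch of the idea as I understand it (the `tile`/`overlap` values are placeholders, and stitching the per-tile feature maps back together is the part I haven't worked out):

```python
import tensorflow as tf

def tiled_features(model, image, tile=1024, overlap=128):
    # Evaluate `model` on overlapping tiles of `image` (shape 1 x H x W x 3)
    # so that no single forward pass has to hold the full image's
    # activations in GPU memory. The overlap must cover the receptive
    # field of the layers of interest; 128 is just a placeholder.
    _, h, w, _ = image.shape
    step = tile - overlap
    outputs = []
    for y in range(0, h, step):
        for x in range(0, w, step):
            patch = image[:, y:y + tile, x:x + tile, :]
            outputs.append(model(patch))
    return outputs  # stitching these back together is model-specific
```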

wandadars
  • Could you be more specific about your problem and the API you're using? You can distribute evaluation across multiple GPUs with `tf.distribute.MirroredStrategy`. – Sharky Mar 19 '19 at 19:35
  • @Sharky I have tried to make it a bit clearer. Is MirroredStrategy something that can be used when running TensorFlow with eager execution enabled? – wandadars Mar 27 '19 at 17:43
  • Yes it can, but all it will do is evaluate num_gpus images at a time. It won't affect the memory limit. – Sharky Mar 27 '19 at 17:59
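
To illustrate the `MirroredStrategy` suggestion from the comments, a minimal sketch (TF 2.x-style; the 512x512 size is a placeholder, and as Sharky notes, each replica still needs a whole image to fit in its own GPU's memory):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # one replica per visible GPU
print('Number of replicas:', strategy.num_replicas_in_sync)

with strategy.scope():
    # Weights are mirrored (copied) onto every GPU, not split across them.
    vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')

# With num_replicas images per batch, each GPU evaluates one image in
# parallel; the memory required per image is unchanged.
images = tf.random.uniform((strategy.num_replicas_in_sync, 512, 512, 3))
features = vgg.predict(images)
```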

0 Answers