
How can you program Keras or TensorFlow to partition training across multiple GPUs? Say you are on an Amazon EC2 instance that has 8 GPUs and you want to use all of them to train faster, but your code is written for a single CPU or GPU.

Hector Blandin
  • This is a bit broad for this site. You are asking us to design your solution for you. https://stackoverflow.com/help/how-to-ask –  Oct 21 '17 at 14:01

1 Answer


Yes, you can run Keras models on multiple GPUs. This is only possible with the TensorFlow backend for the time being, because the Theano feature is still rather new. We are looking at adding support for multi-GPU in Theano in the near future (it should be fairly straightforward).

With the TensorFlow backend, you can achieve this the same way as you would in pure TensorFlow: by using a `with tf.device(d)` scope when defining the Keras layers.

Originally from here
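
A minimal sketch of that approach (not from the original answer), assuming the Keras functional API on a TensorFlow 1.x backend; the device names and layer sizes are illustrative:

```python
import tensorflow as tf
from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(784,))

# Place the first group of layers on GPU 0
with tf.device('/gpu:0'):
    x = Dense(512, activation='relu')(inputs)

# Place the second group of layers on GPU 1
with tf.device('/gpu:1'):
    outputs = Dense(10, activation='softmax')(x)

model = Model(inputs, outputs)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
```

This splits the model itself across devices (model parallelism); TensorFlow inserts the cross-device transfers automatically when the graph is executed.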

Stepan Novikov
    As a complement to this answer, you can follow the answers in the following link. The idea is to set `with tf.device('/gpu:0'):`, or `with tf.device('/gpu:1')`, etc. for each group of layers. https://stackoverflow.com/questions/46366216/tensorflow-is-it-possible-to-manually-decide-which-tensors-in-a-graph-go-to-th – Daniel Möller Oct 21 '17 at 14:08
  • @DanielMöller: Thank you! – Hector Blandin Oct 21 '17 at 16:03