I was wondering if a model trained on the GPU can be used to run inference on the CPU (and vice versa)? Thanks!
You can do it as long as your model doesn't have explicit device allocations. I.e., if your model has blocks like `with tf.device('gpu:0')`, it will complain when you run it on a machine without a GPU.
In such cases you must make sure your imported model doesn't have explicit device assignments, for instance by using the `clear_devices` argument in `import_meta_graph`.
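
A minimal sketch of that approach, using the TF1.x-style API; the checkpoint names `model.ckpt.meta` / `model.ckpt` are placeholders for your own files:

```python
import tensorflow as tf

# clear_devices=True strips the explicit device assignments
# (e.g. /device:GPU:0) recorded in the exported meta graph.
saver = tf.train.import_meta_graph('model.ckpt.meta', clear_devices=True)

with tf.Session() as sess:
    # Restore the trained weights; placement now falls back to
    # TensorFlow's automatic device placement (CPU if no GPU is present).
    saver.restore(sess, 'model.ckpt')
    # ... run inference, e.g. sess.run(output_tensor, feed_dict={...})
```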

Yaroslav Bulatov
- Just to be sure, it means that with `clear_devices=True` I can use my model on my CPU, even if I trained it with blocks like `with tf.device('gpu:0')`? – Pusheen_the_dev Dec 05 '16 at 17:47
- Yes, that clears the devices and lets TF's automatic placement algorithm use the available devices. – Yaroslav Bulatov Dec 05 '16 at 17:48
- It's also possible to save the model with cleared devices to a SavedModel serialization: see, for example, `add_meta_graph()` in the [SavedModelBuilder](https://www.tensorflow.org/api_docs/python/tf/saved_model/builder/SavedModelBuilder) class. – Mark Nov 15 '17 at 22:07
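
A rough sketch of the export step Mark describes (TF1.x-style API; the export path is a placeholder, and `add_meta_graph_and_variables` is used here since it is the one-shot variant that also accepts `clear_devices`):

```python
import tensorflow as tf

# Hypothetical export directory; replace with your own path.
export_dir = '/tmp/cleared_model'

with tf.Session() as sess:
    # ... build or restore your trained graph here ...
    sess.run(tf.global_variables_initializer())

    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    # clear_devices=True drops the explicit device assignments from the
    # exported meta graph, so the SavedModel loads on CPU-only machines.
    builder.add_meta_graph_and_variables(
        sess,
        tags=[tf.saved_model.tag_constants.SERVING],
        clear_devices=True)
    builder.save()
```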
- @YaroslavBulatov Can the `json` or `yaml` files themselves be manually rewritten to remove `with tf.device` statements? – adam.hendry Sep 05 '20 at 17:08