12

Anyone know if Tensorflow Lite has GPU support for Python? I've seen guides for Android and iOS, but I haven't come across anything about Python. If tensorflow-gpu is installed and tensorflow.lite.python.interpreter is imported, will GPU be used automatically?

Hichem BOUSSETTA
John M.
  • Yes! It automatically processes the data on the GPU. But if you want to use TensorFlow Lite on an embedded device, TensorFlow provides the **TensorFlow Lite GPU delegate**. You can read [this](https://www.tensorflow.org/lite/performance/gpu) for more information. – Krunal V May 17 '19 at 11:24
  • @kruxx But the guide doesn't seem to suggest Python is supported. Does that mean TFLite doesn't support GPU for Python then? – John M. May 17 '19 at 11:26
  • If you change your device in `tf.device` from CPU to GPU, you can run the same model on the GPU as well. – Krunal V May 17 '19 at 11:32
  • @kruxx I've tried that, but I'm not getting any GPU activity. Is the GPU delegate the only way to get the GPU to do the processing, though? If so, setting `tf.device` wouldn't be adequate. – John M. May 17 '19 at 11:51
  • @JohnM are there any news on this topic? Have you found a solution? – Patrick Na May 18 '20 at 14:09

3 Answers

3

According to this thread, it is not supported.

RomanS
1

One solution is to convert the TFLite model to ONNX and run it with onnxruntime-gpu.

Convert to ONNX with https://github.com/onnx/tensorflow-onnx:

pip install tf2onnx
python3 -m tf2onnx.convert --opset 11 --tflite path/to/model.tflite  --output path/to/model.onnx

then `pip install onnxruntime-gpu`

and run it like:

import onnxruntime

# newer onnxruntime releases want the execution providers listed explicitly
session = onnxruntime.InferenceSession('/path/to/model.onnx',
                                       providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
raw_output = session.run(['output_name'], {'input_name': img})

You can get the input names with:

for i in range(len(session.get_inputs())):
    print(session.get_inputs()[i].name)

and the same for the output names, but with `get_outputs` instead of `get_inputs`.
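Since wiring input names to arrays by hand is easy to get wrong, a small helper can pair them up and catch count mismatches before calling `run`. This is just a sketch, not part of onnxruntime; the names in the usage comment are placeholders:

```python
def build_feed(input_names, arrays):
    """Pair ONNX Runtime input names with their arrays, in order.

    input_names: e.g. [i.name for i in session.get_inputs()]
    arrays: the corresponding input tensors, in the same order.
    """
    if len(input_names) != len(arrays):
        raise ValueError("model expects %d inputs, got %d"
                         % (len(input_names), len(arrays)))
    return dict(zip(input_names, arrays))

# usage (placeholder names):
# feed = build_feed([i.name for i in session.get_inputs()], [img])
# raw_output = session.run(None, feed)  # None = return all outputs
```

Passing `None` as the first argument to `session.run` returns all outputs, so you can skip hard-coding output names entirely.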

Megan Hardy
-1

You can force the computation to take place on a GPU:

import numpy as np
import tensorflow as tf

with tf.device('/gpu:0'):
    for i in range(10):
        t = np.random.randint(len(x_test))
        ...

Hope this helps.