
I have a TFLite quantized model (int8). Running the model with

import tensorflow as tf
interpreter = tf.lite.Interpreter(model_path=model_path,
                                experimental_delegates=[])

is about 40x faster than running it with:

import tflite_runtime.interpreter as tflite
interpreter = tflite.Interpreter(model_path=model_path,
                                experimental_delegates=[])

Any explanation? I have to use tflite_runtime to run the model on a USB Google Coral.
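For reference, this is roughly how I measure the per-inference latency in both cases (a minimal timing sketch; `invoke` stands for `interpreter.invoke` after `interpreter.allocate_tensors()` has been called):

```python
import time

def benchmark(invoke, warmup=5, runs=50):
    """Return average seconds per call of `invoke`.

    Warm-up iterations are run first so one-time costs (delegate
    initialization, memory allocation) are excluded from the
    measured latency.
    """
    for _ in range(warmup):
        invoke()
    start = time.perf_counter()
    for _ in range(runs):
        invoke()
    return (time.perf_counter() - start) / runs

# Usage with either interpreter, e.g.:
#   interpreter.allocate_tensors()
#   avg_s = benchmark(interpreter.invoke)
```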

Thanks
