I am trying to deploy a simple test application with TensorFlow Lite. I want to use the Coral Edge TPU Stick on my device, so I have to perform quantization-aware training. I want to fit the function f(x) = 2x - 1. My training code looks like this:
import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.contrib import lite
# Create model
model = keras.models.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
# Quantization aware training
sess = keras.backend.get_session()
tf.contrib.quantize.create_training_graph(sess.graph)
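# (As I understand it, create_training_graph rewrites the session graph in
# place, inserting fake-quant ops that track min/max ranges for the weights)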
sess.run(tf.global_variables_initializer())
tf.summary.FileWriter('logs/', graph=sess.graph)
model.compile(optimizer='sgd', loss='mean_squared_error')
# Training data
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
model.fit(xs, ys, epochs=500, batch_size=2)
# Test the model for plausibility
print(model.predict([10.0]))
# Display the quantization-relevant variables
for node in sess.graph.as_graph_def().node:
    if 'weights_quant/AssignMaxLast' in node.name \
            or 'weights_quant/AssignMinLast' in node.name:
        tensor = sess.graph.get_tensor_by_name(node.name + ':0')
        print('{} = {}'.format(node.name, sess.run(tensor)))
# Save the keras model
keras_file = 'quant_linear.h5'
keras.models.save_model(model, keras_file)
# Convert the keras model into a tflite model
converter = lite.TocoConverter.from_keras_model_file(keras_file)
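# Quantize the weights of the converted model (post-training weight quantization)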
converter.post_training_quantize = True
tflite_model = converter.convert()
open('quant_linear.tflite', 'wb').write(tflite_model)
As output, I get (Keras- and CUDA-specific output omitted):
[[18.86733]]
dense/weights_quant/AssignMinLast = 0.0
dense/weights_quant/AssignMaxLast = 1.984399676322937
Two things to note here: first, the model is plausible, since its prediction is close to the expected value of 19. Second, it evidently uses quantized weights: if I do not enable quantization-aware training, the two min/max variables do not show up.
Additionally, this model can be loaded and executed by a tf-lite interpreter instance, along the lines of the snippet below.
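Here is a minimal sketch of how I run the converted model with the interpreter (using the contrib lite Interpreter API as I understand it; paths and shapes match the script above):
import numpy as np
from tensorflow.contrib import lite
# Load the converted model and bind its input/output tensors
interpreter = lite.Interpreter(model_path='quant_linear.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Feed x = 10.0 and read back the prediction (expected to be close to 19)
interpreter.set_tensor(input_details[0]['index'],
                       np.array([[10.0]], dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))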
To be able to use it with TPU support, however, I have to convert it with the edgetpu_compiler. After installing it, I execute
edgetpu_compiler quant_linear.tflite
Unfortunately, the compiler seems unable to recognize that the model is quantized; it outputs:
user@ubuntu:~/TensorFlow$ edgetpu_compiler quant_linear.tflite
Edge TPU Compiler version 1.0.249710469
INFO: Initialized TensorFlow Lite runtime.
Invalid model: quant_linear.tflite
Model not quantized
I have also tried the online compiler, which fails in the same way. Is this a bug, or did I mess something up during training/converting? Also, is there a tool to verify that I am really using a quantized model? The closest thing I could come up with is sketched below.
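What I tried as a sanity check (assuming get_tensor_details() works the way I expect) is to dump every tensor's dtype and quantization parameters; a fully quantized model should show uint8 tensors with non-trivial (scale, zero_point) pairs:
from tensorflow.contrib import lite
interpreter = lite.Interpreter(model_path='quant_linear.tflite')
interpreter.allocate_tensors()
# Print each tensor's dtype and (scale, zero_point); float32 everywhere
# would mean the model is not actually quantized
for detail in interpreter.get_tensor_details():
    print(detail['name'], detail['dtype'], detail['quantization'])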
Thanks!