To run the tflite converter on your local machine, you will need Bazel and TOCO installed.
And if you read some issues on GitHub, tflite causes a lot of trouble in some versions of TensorFlow. To overcome this, some recommend using tf-nightly!
To avoid all this, simply use Google Colab to convert your .pb into .lite or .tflite.
Since Colab added the "upload" option for bringing files into the current kernel, this is, I think, the simplest way, without having to worry about other packages and their dependencies.
Here is the code:
```python
from google.colab import drive
drive.mount('/content/drive')

# Use %cd, not !cd: a "!" command runs in its own subshell,
# so it never changes the notebook's working directory.
%cd /content/drive/My\ Drive

# Alternatively, upload the .pb file directly from your machine.
from google.colab import files
pbfile = files.upload()

import tensorflow as tf

localpb = 'frozen_inference_graph_frcnn.pb'   # path to the frozen graph
tflite_file = 'frcnn_od.lite'                 # name of the output model
print("{} -> {}".format(localpb, tflite_file))

# from_frozen_graph is the TF 1.x API; on TF 2.x use
# tf.compat.v1.lite.TFLiteConverter instead.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    localpb,
    ["image_tensor"],       # input tensor names
    ["detection_boxes"]     # output tensor names
)

tflite_model = converter.convert()
with open(tflite_file, 'wb') as f:
    f.write(tflite_model)

# sanity check: make sure the converted model actually loads
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

# download the optimized .lite file to your local machine
files.download(tflite_file)
```
There are two ways in which you can upload your .pb file to the current session:
i) (The easy way) After running the first cell in the notebook above, your Drive will be mounted. In the Files pane on the left side of the screen, right-click the folder you want to upload your .pb file to and choose Upload.
Then use `ls` and `cd` commands to work your way into that folder and run the tflite converter cell.
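One caveat when navigating in Colab: a `!cd somedir` cell has no lasting effect, because each `!` command runs in a throwaway subshell. Use the `%cd` magic or `os.chdir` instead. A minimal sketch (using a temp directory as a stand-in for your Drive folder):

```python
import os
import tempfile

# "!cd" in Colab runs in a subshell, so the notebook's working directory
# never changes; os.chdir (or the %cd magic) does persist across cells.
target = tempfile.mkdtemp()   # stand-in for '/content/drive/My Drive'
os.chdir(target)
print(os.getcwd())            # now inside the target directory
```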
ii) Run the cell with the `files.upload()` command, click Browse, and choose the .pb file from your local machine.
Once the file is uploaded, give its path to the variable `localpb`, pick a name for the .lite model, and then simply run the cell containing the `TFLiteConverter` command.
And voila. A tflite model should appear in your Drive. Simply right-click it and download it to your local machine to run inference.
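To actually run inference, load the file with `tf.lite.Interpreter`. The sketch below builds a toy model on the spot (since the detection graph isn't at hand) just to show the set-tensor/invoke/get-tensor mechanics; for the real model, pass `model_path='frcnn_od.lite'` instead of `model_content`:

```python
import numpy as np
import tensorflow as tf

# Build a tiny throwaway model so the example is self-contained.
@tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
def double(x):
    return x * 2.0

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [double.get_concrete_function()])
tflite_model = converter.convert()

# Same calls you would use on the downloaded .lite file
# (tf.lite.Interpreter(model_path='frcnn_od.lite')).
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

interpreter.set_tensor(inp['index'], np.ones((1, 4), np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out['index'])
print(result)   # each input value doubled
```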