
Environment: tensorflow=2.0, tensorflow-model-optimization=0.3.0, python=3.6.8

When I convert the Keras model to TFLite with the code below:

import tensorflow as tf

# Load the pruned Keras model and convert it to TFLite.
m1 = 'ownmodel_pruW.h5'
model = tf.keras.models.load_model(m1)

tflite_model_file = 'ownnet.tflite'
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open(tflite_model_file, 'wb') as f:
    f.write(tflite_model)

the conversion fails with the error below:

ConverterError: See console for info. 2020-05-14 20:43:12.220536: F tensorflow/lite/toco/import_tensorflow.cc:2471] Check failed: status.ok() Neither input_content (0) nor float_val (73723) have the right dimensions (73728) for this float tensor (while processing node 'Modelnet/conv2/Conv2D/ReadVariableOp')

Is there any way to resolve this error?

  • Hi, it seems like you have an undefined shape in your model. Can you try the newest version of TensorFlow (pip install tf-nightly) to see if this resolves your issue? – daverim May 15 '20 at 05:25
  • Thanks for your suggestions. I tried different versions of TensorFlow; the results are as follows: 1. TensorFlow==1.13.1: works fine after adding the line "converter.post_training_quantize = True" – Bojie Sheng May 15 '20 at 08:52
  • 2. tensorflow==1.15: tf.lite.Interpreter: RuntimeError: Encountered unresolved custom op: FusedBatchNormV3.Node number 2 (FusedBatchNormV3) failed to prepare. – Bojie Sheng May 15 '20 at 08:55
  • 3. Tensorflow-gpu==2.1/2.2 or Tensorflow==2.1/2.0: ConverterError as shown in this topic. – Bojie Sheng May 15 '20 at 08:56
  • 4. tf-nightly==2.3: (many errors) ...toco/import_tensorflow.cc:1324] Converting unsupported operation: FusedBatchNormV3 tooling_util.cc:627] Check failed: dim >= 1 (-1 vs. 1) – Bojie Sheng May 15 '20 at 08:56
  • tf-nightly should work even with the dim-size issue. I guess you are not retraining your model with tf-nightly. Can you provide the model ownmodel_pruW.h5, or describe how you built it? It may be a mismatch between the training code and the converter. – daverim May 16 '20 at 01:16
  • Many thanks for your answers. I trained the model, then pruned it, and finally converted it to TFLite, but got this error. Please check my code here: https://github.com/pkb14197/tensorflow-quantization-error-2471/blob/master/ownmodelPrune1.ipynb – Bojie Sheng May 16 '20 at 17:13
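
For reference, here is a minimal sketch of the train → prune → strip → convert workflow described in the last comment, using the tfmot pruning API. The stand-in architecture, the placeholder training data (x_train / y_train) and the output filename are assumptions for illustration, not taken from the linked notebook; the point is the order of operations, in particular stripping the pruning wrappers with strip_pruning before handing the model to the converter.

import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Stand-in Keras model (placeholder architecture, not the original Modelnet).
base_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation='relu', input_shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Wrap the model with pruning wrappers.
pruning_schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.5, begin_step=0, end_step=1000)
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(
    base_model, pruning_schedule=pruning_schedule)
pruned_model.compile(optimizer='adam',
                     loss='sparse_categorical_crossentropy',
                     metrics=['accuracy'])

# Fine-tune with the pruning callback (x_train / y_train are placeholders).
# pruned_model.fit(x_train, y_train, epochs=2,
#                  callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Strip the pruning wrappers before export; the wrapper layers themselves are
# not meant to be fed to the TFLite converter.
export_model = tfmot.sparsity.keras.strip_pruning(pruned_model)

converter = tf.lite.TFLiteConverter.from_keras_model(export_model)
tflite_model = converter.convert()
with open('ownnet_pruned.tflite', 'wb') as f:
    f.write(tflite_model)

Whether this avoids the Check failed: status.ok() error above is untested; it only illustrates the intended sequence of steps.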

0 Answers