
I'm making an object detection app for Android, and I got good performance while training with the ssd_mobilenet_v1_fpn model.

I exported the frozen inference graph, converted it to TFLite, and quantized it to improve performance. But when I try it in the TensorFlow Lite Object Detection Android Demo, the app crashes.

The app works perfectly with the default model (ssd_mobilenet_v1), but that model unfortunately isn't good at detecting and classifying small objects.

Here is my quantized ssd_mobilenet_v1_fpn model:

Google Drive: https://drive.google.com/file/d/1rfc64nUJzHQjxigD6hZ6FqxyGhLRbyB1/view?usp=sharing

OneDrive: https://univpr-my.sharepoint.com/:u:/g/personal/vito_filomeno_studenti_unipr_it/EXtl9aitsUZBg6w3awcLbfcBGBgrSV4kqBdSlS3LJOXKkg?e=kHEcy2

Here is the unquantized model:

Google Drive: https://drive.google.com/file/d/11c_PdgobP0jvzTnssOkmcjp19DZoBAAQ/view?usp=sharing

OneDrive: https://univpr-my.sharepoint.com/:u:/g/personal/vito_filomeno_studenti_unipr_it/EcVpJ44Daf5OgpVTYG1eD38B6P1mbnospRb8wXU_WQRh0g?e=cIgpQ2

For quantization I used this command line:

bazel run -c opt tensorflow/lite/toco:toco -- \
  --input_file=tflite_graph.pb \
  --output_file=detect_quant.tflite \
  --input_shapes=1,640,480,3 \
  --input_arrays=normalized_input_image_tensor \
  --output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 \
  --inference_type=QUANTIZED_UINT8 \
  --mean_values=128 \
  --std_values=128 \
  --change_concat_input_ranges=false \
  --allow_custom_ops \
  --default_ranges_min=0 \
  --default_ranges_max=6

I also tried the TFLite Converter Python API, but it doesn't work for this model.
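For reference, a rough TF 1.x Python-API equivalent of the toco command above would look like the sketch below. This is an assumption of what such a conversion looks like, not the exact code that was tried, and attribute names can differ slightly between TF 1.x releases:

import tensorflow as tf  # TF 1.x

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    'tflite_graph.pb',
    input_arrays=['normalized_input_image_tensor'],
    output_arrays=['TFLite_Detection_PostProcess',
                   'TFLite_Detection_PostProcess:1',
                   'TFLite_Detection_PostProcess:2',
                   'TFLite_Detection_PostProcess:3'],
    input_shapes={'normalized_input_image_tensor': [1, 640, 480, 3]})
converter.inference_type = tf.uint8                        # --inference_type=QUANTIZED_UINT8
converter.quantized_input_stats = {'normalized_input_image_tensor': (128.0, 128.0)}  # (mean, std)
converter.default_ranges_stats = (0, 6)                    # --default_ranges_min / --default_ranges_max
converter.change_concat_input_ranges = False
converter.allow_custom_ops = True                          # TFLite_Detection_PostProcess is a custom op
tflite_model = converter.convert()
with open('detect_quant.tflite', 'wb') as f:
    f.write(tflite_model)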

Here are the Android logcat errors:

2020-09-16 18:54:06.363 29747-29747/org.tensorflow.lite.examples.detection E/Minikin: Could not get cmap table size!

2020-09-16 18:54:06.364 29747-29767/org.tensorflow.lite.examples.detection E/MemoryLeakMonitorManager: MemoryLeakMonitor.jar is not exist!

2020-09-16 18:54:06.871 29747-29747/org.tensorflow.lite.examples.detection E/BufferQueueProducer: [] Can not get hwsched service

2020-09-16 18:54:21.033 29747-29786/org.tensorflow.lite.examples.detection A/libc: Fatal signal 6 (SIGABRT), code -6 in tid 29786 (inference)

Has anyone managed to use an FPN model on Android, or a model other than ssd_mobilenet_v1?

3 Answers


First, when you apply quantization, accuracy gets worse. The more aggressive the quantization (float => int), the worse it becomes. For detection models, the result tends to work poorly on small objects and to fit the bounding boxes of big objects less well. I am working on a paper to solve this problem, and may come back to you soon on how to solve it with SSD.

Second, I don't have access to see your model, mate. However, according to this and my experience with quantization, you can convert any detection model with an SSD backbone. You may want to follow the instructions I gave you to make sure the quantization is OK.
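One way to check whether the quantized model is usable at all, before touching the Android demo, is to run it once with the TFLite Interpreter on the desktop. A minimal sketch, assuming a uint8-quantized model saved as detect_quant.tflite (the file name and the dummy input are placeholders):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='detect_quant.tflite')
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
print('input:', inp['shape'], inp['dtype'])   # expect something like [1, 640, 480, 3] and uint8

# Feed a dummy frame with the expected shape/dtype; if invoke() aborts here,
# it will usually reproduce the crash seen on the device.
dummy = np.random.randint(0, 256, size=inp['shape'], dtype=np.uint8)
interpreter.set_tensor(inp['index'], dummy)
interpreter.invoke()

for out in interpreter.get_output_details():
    print(out['name'], interpreter.get_tensor(out['index']).shape)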

dtlam26
  • I changed the privacy settings on Google Drive and also added a OneDrive link just in case. However, I suspect the problem is not with the precision or quantization, but with the model itself. I thought it might work on mobile devices because it's a MobileNet, but I couldn't find anyone who has done it. – vito filomeno Sep 17 '20 at 10:05
  • SSD works great as a PC model. However, when converted to TFLite, this model already has problems due to the lower bit precision. SSD with MobileNet is a really deep model, and quantizing every layer reduces the performance drastically. – dtlam26 Sep 17 '20 at 18:36
  • Can you also attach the quantization code and your pretrained model from before quantization? To properly test whether it is deployable on an Android device, you should try to run inference with the TFLite model from the source I gave you. Also, about the input: why is your input 640x480? – dtlam26 Sep 17 '20 at 18:48
  • I updated the question. Images from the Android camera are in 4:3 resolution, so I scale them to 640x480 for inference in order to avoid distortion. The default input of mobilenet_v1_fpn from the model zoo is 640x640, which should be even heavier than my custom model. – vito filomeno Sep 18 '20 at 15:23
  • Can you rescale your screen image and feed it to one of the already-quantized models here to check? https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md – dtlam26 Sep 19 '20 at 12:53
  • These are quantized models provided by Google with different image input sizes. Usually MobileNet prefers a square image. You should keep this square input by rescaling your image from 640x480 to that size. In computer vision, only segmentation usually takes a non-square input. – dtlam26 Sep 19 '20 at 12:55
  • @vitofilomeno Any updates? If not, you should close the topic. – dtlam26 Sep 23 '20 at 18:43

You should change --default_ranges_max to 255 if the input is an image, and use tflite_convert. By the way, why couldn't you use the Python APIs for this? If the input is a frozen graph, you could convert it like the following:

import tensorflow as tf  # TF 1.x

converter = tf.lite.TFLiteConverter.from_frozen_graph('tmp.pb', input_arrays=..., output_arrays=...)
tflite_model = converter.convert()

Meanwhile, the Object Detection API contains a doc for Running TF2 Detection API Models on mobile. It also contains the Python script export_tflite_graph_tf2.py.
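For the TF2 route, the flow in that doc is roughly: export a TFLite-friendly SavedModel with export_tflite_graph_tf2.py, then convert it with the Python converter. A minimal sketch (output paths are placeholders):

import tensorflow as tf  # TF 2.x

# Convert the SavedModel written by export_tflite_graph_tf2.py
# (the 'saved_model' sub-directory of its output folder).
converter = tf.lite.TFLiteConverter.from_saved_model('exported/saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional post-training quantization
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)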

Yuqi Li

I couldn't find a way to run this model on Android; it probably isn't possible, or my phone isn't powerful enough.

However, I solved my problem by using two different networks: MobileNetV1 for object detection (detecting only one class, "object"), and a second network for classification (it takes the objects' bounding boxes and classifies them). It's not the most elegant solution, but at least it works.
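For illustration, the two-stage idea can be sketched on the desktop with the TFLite Interpreter. The model file names, the uint8 input dtype, and the output ordering below are assumptions (standard SSD post-process outputs), not the actual app code:

import numpy as np
import tensorflow as tf
from PIL import Image

# Stage 1: a one-class detector proposes boxes; stage 2: a classifier labels each crop.
detector = tf.lite.Interpreter(model_path='detect.tflite')      # detects the single class "object"
classifier = tf.lite.Interpreter(model_path='classify.tflite')  # assigns the actual label
detector.allocate_tensors()
classifier.allocate_tensors()

image = Image.open('frame.jpg').convert('RGB')

det_in = detector.get_input_details()[0]
_, det_h, det_w, _ = det_in['shape']
det_input = np.expand_dims(np.asarray(image.resize((det_w, det_h)), dtype=np.uint8), 0)
detector.set_tensor(det_in['index'], det_input)
detector.invoke()

# Standard SSD post-process outputs: boxes [1, N, 4] (normalized ymin, xmin, ymax, xmax),
# classes [1, N], scores [1, N], number of detections [1].
boxes = detector.get_tensor(detector.get_output_details()[0]['index'])[0]
scores = detector.get_tensor(detector.get_output_details()[2]['index'])[0]

cls_in = classifier.get_input_details()[0]
_, cls_h, cls_w, _ = cls_in['shape']
for box, score in zip(boxes, scores):
    if score < 0.5:
        continue
    ymin, xmin, ymax, xmax = box
    crop = image.crop((int(xmin * image.width), int(ymin * image.height),
                       int(xmax * image.width), int(ymax * image.height)))
    cls_input = np.expand_dims(np.asarray(crop.resize((cls_w, cls_h)), dtype=np.uint8), 0)
    classifier.set_tensor(cls_in['index'], cls_input)
    classifier.invoke()
    probs = classifier.get_tensor(classifier.get_output_details()[0]['index'])[0]
    print('score %.2f -> class %d' % (score, int(np.argmax(probs))))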