I have retrained a TensorFlow 2.0 model as a single-class object detector, prepared with the Object Detection API v2 (https://tensorflow-object-detection-api-tutorial.readthedocs.io/).
After that I converted it to ONNX (tf2onnx.convert) and tested it - the inference results were the same.
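For reference, the conversion was done with the tf2onnx CLI, roughly like this (the paths and the opset value are placeholders, not my exact setup):

```bash
# convert the SavedModel exported by the Object Detection API to ONNX
# (placeholder paths; opset is just an example)
python -m tf2onnx.convert \
    --saved-model exported-model/saved_model \
    --output model.onnx \
    --opset 13
```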
I have tested all of these pretrained models (downloaded from the TF2 Detection Model Zoo, https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md):
- ssd_mobilenet_v2_320x320_coco17_tpu-8
- ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8
- ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8
- ssd_resnet50_v1_fpn_640x640_coco17_tpu-8
I retrained it on a small dataset.
The problem appears when I use the model with GStreamer/DeepStream. As far as I can see, DeepStream consumes either the ONNX model or the model after it has been converted to TensorRT. (If I provide the ONNX file, the model is of course still converted to TensorRT, but that is done by DeepStream right before running.)
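By that I mean the nvinfer element, configured roughly like this (a minimal sketch with placeholder paths; either onnx-file or a prebuilt model-engine-file is given):

```ini
# minimal nvinfer config sketch (placeholder paths)
[property]
gpu-id=0
onnx-file=model.onnx
# or, if the engine has been built beforehand:
# model-engine-file=model.engine
batch-size=1
network-mode=0          # 0 = FP32
num-detected-classes=1
gie-unique-id=1
```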
I also tried the same pipeline with train -> convert to ONNX -> convert to TensorRT myself (or just providing the ONNX model to GStreamer). Same issue.
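The standalone ONNX -> TensorRT step was roughly this (trtexec from the TensorRT package; placeholder paths):

```bash
# build a TensorRT engine directly from the ONNX file (placeholder paths)
trtexec --onnx=model.onnx \
        --saveEngine=model.engine \
        --verbose
```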
Error:
ERROR: [TRT]: [graph.cpp::computeInputExecutionUses::519] Error Code 9: Internal Error ((Unnamed Layer* 747) [Recurrence]: IRecurrenceLayer cannot be used to compute a shape tensor)
- TensorRT Version: 8.2.1.8
- tf2onnx Version: 1.9.3
Is there any chance to get some help? Or should I maybe skip the ONNX model and convert directly from TensorFlow to a TensorRT engine - is that possible?
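By that I mean something like TF-TRT's TrtGraphConverterV2, sketched below (placeholder paths) - although, as far as I understand, it produces a SavedModel with embedded TensorRT segments rather than a standalone engine file, so I'm not sure DeepStream could consume that at all:

```python
# minimal TF-TRT sketch (placeholder paths); the output is a SavedModel
# with TensorRT-optimized segments, not a standalone serialized engine
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="exported-model/saved_model")
converter.convert()
converter.save("trt-saved-model")
```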
Of course I can upload the model if it would help.
BR!