I'm working on transfer learning a COCO-trained YOLOv8 model to detect objects in an entirely different use case. I get really encouraging performance metrics when I reload the trained model from its .pt file using the ultralytics library and its built-in functions.
However, when I export the model to .onnx, I have not been able to achieve the same metrics.
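For context, the baseline numbers come from something along these lines (a minimal sketch, not my exact code; the weights path and dataset yaml are placeholders for my actual run):

```python
from ultralytics import YOLO

# Validate the fine-tuned .pt weights with the ultralytics built-ins
# ("runs/detect/train/weights/last.pt" and "my_dataset.yaml" are placeholders).
model = YOLO("runs/detect/train/weights/last.pt")
metrics = model.val(data="my_dataset.yaml", imgsz=1280, rect=True)
print(metrics.box.map50, metrics.box.map)  # the encouraging numbers I'm trying to match with .onnx
```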
- I trained it on rectangular images of size 1280×720 with the flags `rect=True, imgsz=1280`
- I exported it like this:
  ```
  yolo task=detect mode=export model=runs/detect/last.pt imgsz=720,1280 simplify=true format=onnx opset=12
  ```
- I tried exporting without specifying an opset, with opset 11, and with opset 12 (the official docs recommend opset 12)
- I tried exporting both with and without `simplify`
- I've tried running inference with the onnxruntime library, using this GitHub repo as an example (roughly as in the sketch after this list)
- I also tried this Python example by Ultralytics themselves
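My onnxruntime attempt looked roughly like the sketch below (paths, the test image, and the preprocessing are simplified placeholders, not the exact code from the linked repo):

```python
import cv2
import numpy as np
import onnxruntime as ort

# Load the exported model on the GPU (falls back to CPU if onnxruntime-gpu isn't installed).
session = ort.InferenceSession(
    "runs/detect/last.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
inp = session.get_inputs()[0]
# With the default static export the input shape is concrete, e.g. [1, 3, H, W];
# the exporter may round the height up to a multiple of the max stride (32).
_, _, in_h, in_w = inp.shape

img = cv2.imread("sample.jpg")                 # placeholder test image
resized = cv2.resize(img, (in_w, in_h))        # plain resize; ultralytics' own predictor letterboxes instead
blob = resized[:, :, ::-1].transpose(2, 0, 1)  # BGR -> RGB, HWC -> CHW
blob = np.ascontiguousarray(blob, dtype=np.float32)[None] / 255.0

preds = session.run(None, {inp.name: blob})[0]
# preds has shape (1, 4 + num_classes, num_boxes) and still needs score filtering
# and NMS before it is comparable to the .pt predictions.
print(preds.shape)
```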
None of the approaches above have given me the same results as running predict.py with the original .pt file. Has anyone been able to reproduce the same results with .onnx as with their .pt model, on the GPU? If so, can you share how you did it?
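For what it's worth, the side-by-side check I'm doing looks roughly like this (a sketch with placeholder paths; running the .onnx on the GPU through ultralytics assumes onnxruntime-gpu is installed):

```python
from ultralytics import YOLO

# Run the same image through both weight files and compare boxes and confidences.
for weights in ("runs/detect/last.pt", "runs/detect/last.onnx"):
    model = YOLO(weights)
    result = model.predict("sample.jpg", imgsz=(720, 1280), device=0)[0]  # imgsz matches my export flags
    print(weights, result.boxes.conf.cpu().numpy(), result.boxes.xyxy.cpu().numpy())
```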