
I'm transfer-learning a COCO-trained YOLOv8 model to detect objects in an entirely different use case. I get really encouraging performance metrics when I reload the trained model from its model.pt file using the ultralytics library and its built-in functions.

However, when I export the model to .onnx I have not been able to reproduce the same metrics.

  • I trained it on rectangular images of size 1280×720 with the flags rect=True, imgsz=1280
  • I exported it like this: yolo task=detect mode=export model=runs/detect/last.pt imgsz=720,1280 simplify=true format=onnx opset=12
  • I tried exporting without specifying an opset, with opset=11, and with opset=12 (the official docs recommend opset 12)
  • I tried to export it with and without simplify
  • I've tried running the exported model with the onnxruntime library, using this GitHub repo as an example (roughly the sketch shown after this list)
  • I tried this Python example by Ultralytics themselves
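
For reference, my onnxruntime attempt looks roughly like the sketch below; the letterbox() helper is my own approximation of the Ultralytics preprocessing (aspect-ratio resize plus gray padding), the paths are placeholders, and the input size is read from the exported model rather than hard-coded.

import cv2
import numpy as np
import onnxruntime as ort

def letterbox(img, new_shape, color=(114, 114, 114)):
    # Resize keeping aspect ratio, then pad to new_shape = (height, width)
    h, w = img.shape[:2]
    r = min(new_shape[0] / h, new_shape[1] / w)
    nh, nw = round(h * r), round(w * r)
    resized = cv2.resize(img, (nw, nh), interpolation=cv2.INTER_LINEAR)
    canvas = np.full((new_shape[0], new_shape[1], 3), color, dtype=np.uint8)
    top, left = (new_shape[0] - nh) // 2, (new_shape[1] - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas, r, (left, top)

session = ort.InferenceSession(
    "runs/detect/last.onnx",  # placeholder path to the exported model
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
inp = session.get_inputs()[0]
_, _, in_h, in_w = inp.shape  # whatever size the export baked in (imgsz may be rounded to a stride multiple)

img = cv2.imread("1.png")
padded, ratio, (dx, dy) = letterbox(img, (in_h, in_w))
blob = cv2.cvtColor(padded, cv2.COLOR_BGR2RGB).transpose(2, 0, 1)[None]
blob = np.ascontiguousarray(blob, dtype=np.float32) / 255.0

# Raw output is (1, 4 + num_classes, num_anchors); NMS still has to be applied
# and the boxes mapped back through ratio / (dx, dy)
outputs = session.run(None, {inp.name: blob})[0]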

None of the approaches above has given me the same results as using predict.py and loading the model from the original .pt file. Has anyone been able to reproduce with .onnx the same results they got from their .pt model, on the GPU? If so, can you share how you did it?

  • This is my training command if it helps: `yolo detect train data=data/custom.yaml model=yolov8n.pt epochs=100 imgsz=1280 rect=True device=0 batch=8` – moonboi Mar 10 '23 at 20:37
  • What is the performance drop you get @moonboi? – Mike B Mar 11 '23 at 16:17
  • The ultralytics package will resize the image and convert it to a numpy array. You will need to do the same if you haven't already (see the sketch after these comments for a quick way to check this). – Brian Low Jul 13 '23 at 01:51
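
A quick way to check Brian Low's point is to run both weight files through the same Ultralytics pipeline on the same image and compare the boxes: if they match, the export itself is fine and the discrepancy is in the hand-rolled preprocessing. A minimal sketch, reusing the paths from the question (the image path is a placeholder):

from ultralytics import YOLO

# Same image, same imgsz, same pipeline; only the weights differ
pt_result = YOLO("runs/detect/last.pt").predict("1.png", imgsz=1280)[0]
onnx_result = YOLO("runs/detect/last.onnx").predict("1.png", imgsz=1280)[0]

print(pt_result.boxes.xyxy)    # boxes from the .pt weights
print(onnx_result.boxes.xyxy)  # boxes from the exported .onnx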

1 Answer


I'm facing the same issue here. I exported using opset=12 and also without specifying it, and got poor performance when running the ONNX model through OpenCV. The problem seems to sit with OpenCV; I don't know what happens under the hood. If I use the exported ONNX model with Ultralytics YOLO instead, it works perfectly fine:

from ultralytics import YOLO
import cv2

model = YOLO("../runs/detect/train/weights/best.onnx")

im2 = cv2.imread("1.png")
results = model.predict(source=im2, save=True, save_txt=True, imgsz=1280)  # save predictions as labels

for result in results:
    boxes = result.boxes  # Boxes object for bbox outputs
    masks = result.masks  # Masks object for segmentation masks outputs
    probs = result.probs  # Class probabilities for classification outputs

Even loading the ONNX model this way, it performs exactly like loading the .pt model.
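
If you want to confirm the metrics and not just the predictions, the same trick should work in val mode, since Ultralytics can validate an exported file directly, e.g. `yolo detect val model=runs/detect/train/weights/best.onnx data=data/custom.yaml imgsz=1280 device=0` (the data path is taken from the training command in the question). With onnxruntime-gpu installed, device=0 should also run the ONNX model on the GPU.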