I trained YOLOv5 on my custom dataset. I want to run inference with the trained model in C++ using OpenCV (`dnn::readNet`), so I tried both of the following export commands:
python export.py --data ...\lp.yaml --imgsz 480 --weights best.pt --include onnx
python export.py --data ...\lp.yaml --imgsz 480 --weights best.pt --include onnx --simplify
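For context, the C++ side I am targeting looks roughly like this (a minimal sketch: the file paths are placeholders, the 480 input size matches the export, and blobFromImage does a plain resize rather than the letterboxing that detect.py applies):

```cpp
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

int main() {
    // Load the ONNX model produced by export.py (path is a placeholder).
    cv::dnn::Net net = cv::dnn::readNet("best.onnx");
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);

    cv::Mat img = cv::imread("img3.bmp");

    // YOLOv5 expects RGB input scaled to [0,1] at the export size (480 here).
    // Note: this is a plain resize, not the letterbox used by detect.py.
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0 / 255.0, cv::Size(480, 480),
                                          cv::Scalar(), /*swapRB=*/true, /*crop=*/false);
    net.setInput(blob);

    // Single output tensor of shape [1, num_predictions, 5 + num_classes].
    cv::Mat out = net.forward();
    std::cout << "output dims: " << out.size[0] << " x " << out.size[1]
              << " x " << out.size[2] << std::endl;
    return 0;
}
```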
The results obtained from inferencing best.onnx (from either command) are weird in both C++ and Python. To sanity-check the exported file, I ran the following commands in Python (command 1 without --dnn, command 2 with --dnn):
1- (venv) E:...>python detect.py --data data/lp.yaml --source img3.bmp --weights best.onnx --imgsz 480
detect: weights=['best.onnx'], source=img3.bmp, data=data/lp.yaml, imgsz=[480, 480], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1 YOLOv5 75f2b42 Python-3.8.3 torch-1.8.0+cpu CPU
Loading best.onnx for ONNX Runtime inference... image 1/1 E:\Projects\yolov5_alpr_win10\img3.BMP: 480x480 11 lps, 13.0ms Speed: 1.0ms pre-process, 13.0ms inference, 1.0ms NMS per image at shape (1, 3, 480, 480) Results saved to runs\detect\exp58
2- (venv) E:...>python detect.py --data data/lp.yaml --source img3.bmp --weights best.onnx --imgsz 480 --dnn
detect: weights=['best.onnx'], source=img3.bmp, data=data/lp.yaml, imgsz=[480, 480], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=True, vid_stride=1 YOLOv5 75f2b42 Python-3.8.3 torch-1.8.0+cpu CPU
Loading best.onnx for ONNX OpenCV DNN inference... image 1/1 E:\Projects\yolov5_alpr_win10\img3.BMP: 480x480 11 lps, 62.8ms Speed: 1.0ms pre-process, 62.8ms inference, 0.0ms NMS per image at shape (1, 3, 480, 480) Results saved to runs\detect\exp59
Each of these results should contain 2 lps, but as you can see it does not. The detections look like random bounding boxes with no relation to the expected output. However, when I run the following command (using the .pt file), the results are perfect:
(venv) E:...>python detect.py --data data/lp.yaml --source img3.bmp --weights best.pt --imgsz 480
detect: weights=['best.pt'], source=img3.bmp, data=data/lp.yaml, imgsz=[480, 480], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1 YOLOv5 75f2b42 Python-3.8.3 torch-1.8.0+cpu CPU
Fusing layers... YOLOv5ng summary: 157 layers, 1760518 parameters, 0 gradients, 4.1 GFLOPs image 1/1 E:\Projects\yolov5_alpr_win10\img3.BMP: 320x480 2 lps, 41.9ms Speed: 1.0ms pre-process, 41.9ms inference, 1.0ms NMS per image at shape (1, 3, 480, 480) Results saved to runs\detect\exp60
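For completeness, the C++ decoding that produces the weird boxes looks roughly like this (a minimal sketch; it assumes a single "lp" class, so each output row is [cx, cy, w, h, obj, cls] with coordinates relative to the 480x480 network input):

```cpp
#include <opencv2/dnn.hpp>
#include <vector>

// Decode a YOLOv5 output tensor of shape [1, N, 5 + num_classes] into boxes.
// Sketch only: assumes a single class and coordinates in network-input pixels.
void decode(const cv::Mat& out, float confThres, float iouThres,
            std::vector<cv::Rect>& keptBoxes, std::vector<float>& keptScores) {
    const int rows = out.size[1];
    const int cols = out.size[2];                      // 5 + num_classes (6 here)
    const float* data = reinterpret_cast<const float*>(out.data);

    std::vector<cv::Rect> boxes;
    std::vector<float> scores;
    for (int i = 0; i < rows; ++i, data += cols) {
        float conf = data[4] * data[5];                // objectness * class score
        if (conf < confThres) continue;
        float cx = data[0], cy = data[1], w = data[2], h = data[3];
        boxes.emplace_back(static_cast<int>(cx - w / 2), static_cast<int>(cy - h / 2),
                           static_cast<int>(w), static_cast<int>(h));
        scores.push_back(conf);
    }

    // Standard NMS on the surviving candidates.
    std::vector<int> keep;
    cv::dnn::NMSBoxes(boxes, scores, confThres, iouThres, keep);
    for (int idx : keep) {
        keptBoxes.push_back(boxes[idx]);
        keptScores.push_back(scores[idx]);
    }
}
```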
My environment:
Windows 10
PyCharm 2020.1.2
package | version |
---|---|
absl-py | 1.2.0 |
asttokens | 2.0.8 |
astunparse | 1.6.3 |
backcall | 0.2.0 |
beautifulsoup4 | 4.11.1 |
bs4 | 0.0.1 |
cachetools | 5.2.0 |
certifi | 2022.9.14 |
charset-normalizer | 2.1.1 |
colorama | 0.4.5 |
coloredlogs | 15.0.1 |
commonmark | 0.9.1 |
contourpy | 1.0.5 |
cycler | 0.11.0 |
decorator | 5.1.1 |
executing | 1.0.0 |
flatbuffers | 22.9.24 |
fonttools | 4.37.2 |
gast | 0.4.0 |
google-auth | 2.11.0 |
google-auth-oauthlib | 0.4.6 |
google-pasta | 0.2.0 |
grpcio | 1.49.0 |
h5py | 3.7.0 |
humanfriendly | 10.0 |
idna | 3.4 |
importlib-metadata | 4.12.0 |
ipython | 8.5.0 |
jedi | 0.18.1 |
keras | 2.10.0 |
Keras-Preprocessing | 1.1.2 |
kiwisolver | 1.4.4 |
libclang | 14.0.6 |
Markdown | 3.4.1 |
MarkupSafe | 2.1.1 |
matplotlib | 3.6.0 |
matplotlib-inline | 0.1.6 |
mpmath | 1.2.1 |
numpy | 1.23.3 |
oauthlib | 3.2.1 |
onnx | 1.12.0 |
onnx-simplifier | 0.4.1 |
onnxruntime | 1.12.1 |
opencv-python | 4.6.0.66 |
opt-einsum | 3.3.0 |
packaging | 21.3 |
pandas | 1.1.4 |
parso | 0.8.3 |
pickleshare | 0.7.5 |
Pillow | 7.1.2 |
pip | 22.2.2 |
pip-search | 0.0.12 |
prompt-toolkit | 3.0.31 |
protobuf | 3.19.5 |
psutil | 5.9.2 |
pure-eval | 0.2.2 |
pyasn1 | 0.4.8 |
pyasn1-modules | 0.2.8 |
Pygments | 2.13.0 |
pyparsing | 3.0.9 |
pyreadline3 | 3.4.1 |
python-dateutil | 2.8.2 |
pytz | 2022.2.1 |
PyYAML | 6.0 |
requests | 2.28.1 |
requests-oauthlib | 1.3.1 |
rich | 12.6.0 |
rsa | 4.9 |
scipy | 1.9.1 |
seaborn | 0.12.0 |
setuptools | 65.3.0 |
six | 1.16.0 |
soupsieve | 2.3.2.post1 |
stack-data | 0.5.0 |
sympy | 1.11.1 |
tensorboard | 2.10.0 |
tensorboard-data-server | 0.6.1 |
tensorboard-plugin-wit | 1.8.1 |
tensorflow-cpu | 2.10.0 |
tensorflow-estimator | 2.10.0 |
tensorflow_intel | 2.10.0 |
tensorflow-io-gcs-filesystem | 0.27.0 |
termcolor | 2.0.1 |
thop | 0.1.1.post2209072238 |
torch | 1.8.0 |
torchvision | 0.9.0 |
tqdm | 4.64.0 |
traitlets | 5.4.0 |
typing_extensions | 4.3.0 |
urllib3 | 1.26.12 |
wcwidth | 0.2.5 |
Werkzeug | 2.2.2 |
wheel | 0.37.1 |
wrapt | 1.14.1 |
zipp | 3.8.1 |
How can I fix the problem?