I am using YOLACT++ and I want to deploy it in C++. The model is originally saved as a yolact.pth file, and I learned from the issues that I can't directly convert the .pth to a .pt file that can be loaded in C++ (see this issue); I first need to convert it to ONNX, which produces yolact.onnx. I did the conversion using this solution from the issues; the comments below it show the terminal commands used to obtain yolact.onnx from yolact.pth.
My problem is that I don't know what to do next to deploy it in C++. Should I be looking for a way to convert yolact.onnx to yolact.pt, or can yolact.onnx be called from C++ directly? One of the issues suggests an interesting direction for this workflow: PyTorch -> ONNX -> NCNN, which the author tested with C++ inference on an ARM device. Is this what I am looking for? I am not very familiar with C++, so I don't know which direction to take.
I also tried adding this at the end of eval.py, where the model is called:
sm = torch.jit.script(net)
sm.save("Yolact.pt")
and I got this error:
torch.jit.frontend.UnsupportedNodeError: with statements aren't
supported: (line 570 in yolact.py)
and line 570 is:
with timer.env('backbone'):
I commented it out, and it then failed on the next "timer.env" at line 574, and so on!
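Instead of commenting out every timer.env block, one workaround I have read about is torch.jit.trace, which records the tensor operations for an example input and simply executes Python constructs like with statements rather than compiling them. A minimal sketch with a stand-in module (TinyNet and env are my own placeholders, not YOLACT code) to illustrate the idea:

```python
import contextlib

import torch
import torch.nn as nn

@contextlib.contextmanager
def env(name):
    # stand-in for yolact's timer.env(...) profiling context manager
    yield

class TinyNet(nn.Module):
    def forward(self, x):
        # a with-block like this is what torch.jit.script rejects
        with env('backbone'):
            return torch.relu(x)

net = TinyNet().eval()
example = torch.randn(1, 3, 8, 8)

# trace runs forward once, records only the tensor ops, and passes
# straight through the context manager, so no with-statement error
traced = torch.jit.trace(net, example)
traced.save("tiny_traced.pt")  # loadable from C++ via torch::jit::load
```

The caveat I'm aware of is that tracing freezes data-dependent control flow to the path taken by the example input, so I'm not sure how well it handles all of YOLACT's post-processing.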