
I have trained a classification model on an Nvidia GPU and saved the model weights (checkpoint.pth). I now want to deploy this model on a Jetson Nano and test it.

Should I convert it to TensorRT? If so, how do I convert it to TensorRT?

I am new to this, so corrections are welcome if my approach is wrong.

Konda

2 Answers


The best way to achieve this is to export an ONNX model from PyTorch. Next, use trtexec, the command-line tool provided by the official TensorRT package, to build a TensorRT engine from the ONNX model.

You can refer to this page: https://github.com/NVIDIA/TensorRT/blob/master/samples/opensource/trtexec/README.md

trtexec is a native tool that you can get from NVIDIA NGC images or download directly from the official website.

If you use a tool such as torch2trt, it is easy to run into unsupported-operator issues, which can be complicated to resolve (especially if you are not familiar with writing TensorRT plugins).

chiehpower

You can use this tool:

https://github.com/NVIDIA-AI-IOT/torch2trt

Here are more details on how to convert a model into an engine file:

https://github.com/NVIDIA-AI-IOT/torch2trt/issues/254
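For reference, a hedged sketch of what the torch2trt path looks like. SmallClassifier is a hypothetical stand-in for your own model class, and the conversion itself requires torch2trt and a CUDA GPU, so it would run on the Jetson itself, not on an arbitrary machine:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for your trained classifier.
class SmallClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def convert_with_torch2trt(checkpoint="checkpoint.pth"):
    # Imported here because torch2trt is only installed on the Jetson.
    from torch2trt import torch2trt

    model = SmallClassifier().eval().cuda()
    model.load_state_dict(torch.load(checkpoint, map_location="cuda"))

    # torch2trt traces the model with an example input, so use the
    # input shape your model expects.
    x = torch.ones(1, 3, 224, 224).cuda()
    model_trt = torch2trt(model, [x])

    # The converted TRTModule can be saved and reloaded like a state dict.
    torch.save(model_trt.state_dict(), "model_trt.pth")
    return model_trt
```

As the answer above notes, this one-call conversion is convenient, but it fails on layers torch2trt has no converter for, which is when the plugin issues in the linked GitHub thread come into play.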

User_12399