I am looking for an end-to-end tutorial on how to convert my trained TensorFlow model to TensorRT to run it on Nvidia Jetson devices. I know how to do it in the abstract (.pb -> ONNX -> [ONNX simplifier] -> TRT engine), but I'd like to see how others do it, because I got no speed gain after converting; maybe I did something wrong. I can't believe there is no pipeline with a step-by-step description on the internet. That's why I am asking here: maybe you have seen such a tutorial...
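For reference, the abstract pipeline above can be sketched as a sequence of CLI calls. This is a hedged sketch, not the asker's exact setup: the paths, model directory, opset, and input shapes are placeholders, and it assumes `tf2onnx`, `onnx-simplifier`, and TensorRT's bundled `trtexec` are installed. One common reason for seeing no speedup is building the engine in FP32; on Jetson, enabling `--fp16` is typically where most of the gain comes from.

```shell
# Placeholder paths/names throughout -- adjust for your model.

# 1. TensorFlow SavedModel -> ONNX with tf2onnx
#    (for a frozen .pb, use --graphdef with --inputs/--outputs instead).
python3 -m tf2onnx.convert \
    --saved-model ./saved_model_dir \
    --output model.onnx \
    --opset 13

# 2. Simplify the ONNX graph (constant folding, redundant-op removal).
python3 -m onnxsim model.onnx model_simplified.onnx

# 3. Build a TensorRT engine with trtexec (ships with TensorRT on Jetson).
#    --fp16 enables half precision; without it the engine may run
#    little faster than the original TensorFlow model.
/usr/src/tensorrt/bin/trtexec \
    --onnx=model_simplified.onnx \
    --saveEngine=model.engine \
    --fp16
```

`trtexec` also prints latency/throughput statistics after building, which is a quick way to check whether the conversion actually helped before writing any inference code.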
- Does this help: https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html? Thanks! – Apr 27 '21 at 06:14
- Yes, those instructions describe exactly what I want to do, but I wanted a more detailed tutorial for newbies. I have already converted the model. After I implement the TRT inference, I'll make the tutorial and post it here. Thanks for your reply – Pavlo Sharhan Apr 28 '21 at 12:30
- Here is the answer. I added the code for inference on video too: https://github.com/pavloshargan/TF-TRT-Pytorch-TRT_tutorial – Pavlo Sharhan May 06 '21 at 17:28