
I trained SSD MobileNet v1 on a custom dataset and now want to run it on a Jetson Nano. I converted it to a frozen graph .pb file using the TensorFlow Object Detection API, but inference eats 2.5 GB of RAM even though the model file is only 22.3 MB. I also tried a TensorRT FP16 conversion, and memory consumption stayed the same.
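For reference, this is roughly the TF-TRT FP16 conversion I ran (a sketch using the TF 1.x API; the paths and output node names below are the standard ones from the Object Detection API export and may differ for other models):

```python
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Load the frozen graph exported by the Object Detection API.
with tf.io.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    frozen_graph = tf.compat.v1.GraphDef()
    frozen_graph.ParseFromString(f.read())

# Convert supported subgraphs to TensorRT engines at FP16 precision.
# nodes_blacklist keeps the detection outputs in TensorFlow.
converter = trt.TrtGraphConverter(
    input_graph_def=frozen_graph,
    nodes_blacklist=['detection_boxes', 'detection_scores',
                     'detection_classes', 'num_detections'],
    precision_mode='FP16',
    max_batch_size=1,
    is_dynamic_op=True)
trt_graph = converter.convert()

# Save the optimized graph for inference on the Nano.
with tf.io.gfile.GFile('trt_fp16_graph.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())
```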

I need the model to be 5 to 6 MB in size, or at least to consume less memory during inference.

  • Have you tried converting it into TRT format and running it? The Nano has native support for TRT models and it gives better performance in terms of FPS – vman Aug 27 '20 at 17:19
  • Yes, I converted it into TRT but it takes the same memory and the same resources; finally I had to convert it to tflite (roughly as sketched below) – Rajneesh Verma Aug 28 '20 at 18:14
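A rough sketch of the TFLite conversion the comment above refers to (TF 1.x), assuming the model was first re-exported with the Object Detection API's export_tflite_ssd_graph.py script; the tensor names and the 300x300 input shape below are the standard ones for that script:

```python
import tensorflow as tf

# Build a converter from the TFLite-compatible frozen graph produced
# by export_tflite_ssd_graph.py (not the regular inference graph).
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='tflite_graph.pb',
    input_arrays=['normalized_input_image_tensor'],
    output_arrays=['TFLite_Detection_PostProcess',
                   'TFLite_Detection_PostProcess:1',
                   'TFLite_Detection_PostProcess:2',
                   'TFLite_Detection_PostProcess:3'],
    input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]})

# The detection post-processing op is a TFLite custom op.
converter.allow_custom_ops = True
# Default optimization quantizes weights, shrinking the file size.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open('detect.tflite', 'wb') as f:
    f.write(tflite_model)
```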

0 Answers