I trained SSD MobileNet V1 on a custom dataset and exported it to a frozen graph (.pb) file using the TensorFlow Object Detection API. I want to run this model on a Jetson Nano, but inference eats 2.5 GB of RAM even though the model file is only 22.3 MB. I tried a TensorRT FP16 conversion and got the same memory consumption.
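For reference, this is roughly the TF-TRT FP16 conversion I ran (TensorFlow 1.x `contrib` API; the graph path and output node names below are the standard Object Detection API ones and are placeholders for my setup):

```python
# Sketch of the TF-TRT FP16 conversion step (TF 1.x on the Nano).
# NOTE: file path and output node names are assumptions matching the
# usual Object Detection API export, not verified against my exact model.
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=['detection_boxes', 'detection_scores',
             'detection_classes', 'num_detections'],
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,  # kept small because of the Nano's 4 GB
    precision_mode='FP16')
```

Even with `max_workspace_size_bytes` kept small, loading the converted graph still uses about the same 2.5 GB at inference time.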
I need to get the model down to around 5-6 MB, or at least make it consume less memory during inference.