I am using the Inception v1 architecture for transfer learning. I downloaded the checkpoint file and the nets and preprocessing modules from the GitHub repository below:
https://github.com/tensorflow/models/tree/master/slim
I have 3700 images. For each image I pull the output of the last pooling layer out of the graph and append it to a list. With every iteration the RAM usage grows, and the run is finally killed at around 2000 images. Can you tell me what mistake I have made?
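Here is a simplified sketch of what my loop looks like (the checkpoint path, the `image_paths` list, the `load_image` helper, and the exact endpoint name are placeholders; the real code uses the nets and preprocessing files from the repository above):

```python
import tensorflow as tf
import tensorflow.contrib.slim as slim

from nets import inception
from preprocessing import inception_preprocessing

image_size = inception.inception_v1.default_image_size  # 224

# The graph is built once, before the loop.
image_ph = tf.placeholder(tf.uint8, shape=[None, None, 3])
processed = inception_preprocessing.preprocess_image(
    image_ph, image_size, image_size, is_training=False)
processed = tf.expand_dims(processed, 0)  # add batch dimension

with slim.arg_scope(inception.inception_v1_arg_scope()):
    _, end_points = inception.inception_v1(
        processed, num_classes=1001, is_training=False)

# The tensor I pull out per image ('Mixed_5c' stands in here for the
# last pooling layer I actually use).
features = end_points['Mixed_5c']

init_fn = slim.assign_from_checkpoint_fn(
    'inception_v1.ckpt',  # placeholder checkpoint path
    slim.get_model_variables('InceptionV1'))

pooled = []
with tf.Session() as sess:
    init_fn(sess)
    for path in image_paths:       # ~3700 image paths (placeholder list)
        img = load_image(path)     # placeholder helper, returns HxWx3 uint8
        out = sess.run(features, feed_dict={image_ph: img})
        pooled.append(out)         # RAM keeps climbing with every iteration
```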
Even if I remove the list append and just print the results, the same thing happens, so I suspect the mistake is in the way I call the graph. Watching the RAM usage, it grows heavier with every iteration, and I don't understand why, since I am not saving anything and every iteration does exactly the same work as the first one.
From my point of view, I am just feeding one image at a time, getting the outputs, and saving them, so it should work regardless of how many images I send.
I have tried it on both a GPU (6 GB) and a CPU (32 GB RAM).