I started using the Theano library because I was sick of compiling and debugging C++ with Caffe (though it is a really great library :) ).
Anyway, I built a deep network (essentially a CNN) with Lasagne and started training it. However, nvidia-smi shows that GPU memory usage keeps fluctuating, which worries me. I never saw this with Caffe, and I suspect it could be why training is slow.
I use the multiprocessing module to prefetch the dataset in advance, and my queue status looks fine, so data loading should not be the cause of the slow training.
I use theano.shared to allocate the data on the GPU in advance, and compile my training function with `givens`.
Any ideas?
Thanks! Happy learning!