I want to retrain the fully connected layers of VGG-16 on large grayscale images (1800x1800), using Keras with the Theano backend.

So far I have:

  • created a new VGG with a single color channel and loaded the weights of the original VGG into it;
  • set trainable=False on all the convolution layers (the pooling and padding layers are not trainable by definition);
  • deleted the first two dense layers, keeping only an output layer with two neurons;
  • drastically increased the max-pooling sizes and strides, because I have to work with 1800x1800 inputs (no choice); the spatial dimensions then drop very quickly to match the original VGG dimensions;
  • reduced the batch size in order to reduce the memory required.
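The first step above (loading the RGB weights into a one-channel VGG) can be sketched by collapsing the input-channel axis of the first convolution kernel, e.g. by averaging. Here `rgb_kernel` is a random stand-in for the real loaded weights, and the (kh, kw, in, out) layout is Keras' channels_last convention — both are assumptions for illustration:

```python
import numpy as np

# Stand-in for VGG-16's first conv kernel: 3x3 receptive field,
# 3 input channels (RGB), 64 output filters, channels_last layout.
rgb_kernel = np.random.rand(3, 3, 3, 64).astype("float32")

# Collapse the three input channels into one by averaging,
# so the kernel accepts a single grayscale channel.
gray_kernel = rgb_kernel.mean(axis=2, keepdims=True)
print(gray_kernel.shape)  # (3, 3, 1, 64)
```

The remaining layers have shapes independent of the input-channel count, so their weights can be copied over unchanged.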

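The pooling arithmetic behind the fourth step can be checked with the standard "valid" pooling formula. The specific enlarged pool sizes below are hypothetical, chosen only to show one configuration that takes 1800 down to the same 7x7 grid that the original VGG reaches from 224:

```python
def pooled_size(size, pools):
    """Spatial size after a chain of (pool_size, stride) 'valid' max-pool stages."""
    for pool, stride in pools:
        size = (size - pool) // stride + 1
    return size

# Original VGG-16: five 2x2 / stride-2 max pools take 224 -> 7.
print(pooled_size(224, [(2, 2)] * 5))  # 7

# One hypothetical set of enlarged pools that also ends at 7 from 1800.
print(pooled_size(1800, [(4, 4), (4, 4), (4, 4), (2, 2), (2, 2)]))  # 7
```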
But when I start the training, I get a CNMEM_STATUS_OUT_OF_MEMORY error. I am using an NVIDIA K40, so I have 12 GB of GPU memory.
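For scale, a rough back-of-the-envelope estimate (float32 forward activations only; channel count from the standard VGG-16 first block) of what a single 1800x1800 input costs in just one early conv layer:

```python
# Rough float32 activation size of one VGG block-1 conv layer
# on a single 1800x1800 image. Backprop buffers roughly double this,
# and block 1 contains two such layers.
h = w = 1800
channels = 64          # filters in VGG-16 block 1
bytes_per_float = 4
one_layer = h * w * channels * bytes_per_float
print(one_layer / 2**30)  # ~0.77 GiB per feature map, per image
```

So even at batch size 1, the early conv feature maps alone consume several GiB before the dense layers are reached, which is consistent with exhausting 12 GB.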

Any idea how to fix it?

stop-cran
FiReTiTi
  • You should include the output of model.summary(); it is quite likely that the model is just too big to fit into VRAM. You can always try reducing the batch size until it fits. – Dr. Snoopy Jun 08 '17 at 00:50
  • The model is the classic VGG with smaller dense layers, so at the end smaller than the classic VGG. I've already reduced the batch size to the minimum possible. – FiReTiTi Jun 08 '17 at 06:48
  • Try reducing your batch size and make sure that your inputs are of reasonable size, like 224x224 in the original VGG – enterML Jun 08 '17 at 12:15
  • @Nain: the batch size is already minimum and I cannot work with smaller images :-( – FiReTiTi Jun 08 '17 at 16:58

0 Answers