
I am retraining an InceptionV3 model on 200 images, using the Adam optimiser:

from keras.optimizers import Adam

opt = Adam(lr=0.0001, decay=0.0001 / 100)
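For reference, Keras' legacy `decay` argument applies time-based decay once per batch update: `lr_t = lr / (1 + decay * iterations)`. A quick sketch (plain Python, no Keras needed) shows that with these numbers the rate barely changes, so the schedule itself is unlikely to cause bouncing:

```python
# Keras' legacy time-based decay: lr_t = lr / (1 + decay * iterations)
base_lr = 0.0001
decay = 0.0001 / 100  # 1e-6 per update

def effective_lr(iterations):
    return base_lr / (1.0 + decay * iterations)

for step in (0, 1000, 10000, 100000):
    print(step, effective_lr(step))
# Even after 100,000 updates the rate has only dropped ~9%.
```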

I noticed the loss bounces around, especially the validation loss. I thought this was down to the learning rate, as suggested in answers such as Transfer Learning - Val_loss strange behaviour and Why is it possible to have low loss, but also very low accuracy, in a convolutional neural network?, but they were not helpful.

So I switched to RMSprop, but I got the same behaviour. Here is what the performance looks like:

[plot: training and validation loss curves]

Any suggestions as to why I am experiencing this and how to tackle it?

owise

1 Answer


Looking at your graphs, I don't think the network is actually learning anything.

I suggest you look into the following:

  1. Is there any zeroed (all-0) input among the images?

  2. Are the gradients too large or too small?

  3. Are the gradients almost constant across multiple batches?

  4. Are all images on the same scale?

  5. Are the classes properly encoded as one-hot vectors?
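Checks 1, 4, and 5 can be run directly on a batch of data. A minimal sketch, assuming the batch is available as NumPy arrays `images` and `labels` (placeholder names, not from the question; the random data here just stands in for a real batch):

```python
import numpy as np

# Placeholder batch: replace with a real (images, labels) batch
# from your input pipeline.
images = np.random.rand(8, 299, 299, 3).astype("float32")
labels = np.eye(4)[np.random.randint(0, 4, size=8)]  # one-hot, 4 classes

# Check 1: any all-zero images (e.g. from a corrupted file)?
zeroed = np.all(images == 0, axis=(1, 2, 3))
print("all-zero images at indices:", np.flatnonzero(zeroed))

# Check 4: consistent scale? The range should match your preprocessing
# (e.g. [-1, 1] if you use InceptionV3's preprocess_input).
print("pixel range:", images.min(), images.max())

# Check 5: proper one-hot encoding: entries are 0/1, each row sums to 1.
assert set(np.unique(labels)) <= {0.0, 1.0}
assert np.allclose(labels.sum(axis=1), 1.0)
```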

Susmit Agrawal
  • What do you mean by 0'ed input in the images? How do I access gradients after training from the Keras history? Yes, the images are all scaled the same, and yes, they are encoded properly. –  owise Apr 04 '20 at 16:33
  • Sometimes if an image file is corrupted, your pipeline may simply feed an array of 0s to the network. This depends on the implementation of the pipeline. – Susmit Agrawal Apr 04 '20 at 20:22
  • As for gradients, I'd suggest switching to `tf.GradientTape()` – Susmit Agrawal Apr 04 '20 at 20:23
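Following up on the `tf.GradientTape()` suggestion, a minimal sketch of inspecting gradient magnitudes for checks 2 and 3 (the tiny model and random batch are placeholders; swap in the InceptionV3 model and a real batch):

```python
import tensorflow as tf

# Placeholder model and batch, standing in for InceptionV3 and real data.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="softmax", input_shape=(10,)),
])
x = tf.random.normal((8, 10))
y = tf.one_hot(tf.random.uniform((8,), maxval=4, dtype=tf.int32), depth=4)
loss_fn = tf.keras.losses.CategoricalCrossentropy()

# Run one forward/backward pass under the tape to expose the gradients.
with tf.GradientTape() as tape:
    loss = loss_fn(y, model(x, training=True))
grads = tape.gradient(loss, model.trainable_variables)

# Check 2: look for vanishing (~0) or exploding (huge) gradient norms.
# Repeating this over several batches and comparing covers check 3.
for var, g in zip(model.trainable_variables, grads):
    print(var.name, float(tf.norm(g)))
```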