
I am training a CNN on an image classification task. On a simpler version of the data this worked fine, but after I made the images more difficult I now encounter the following phenomenon (I let it train overnight):

While training, the training cross-entropy loss goes down, and on my test dataset the cross-entropy loss also goes down. I am further measuring accuracy on the test set, which behaves differently: in the beginning it went up, only to go down again, and then it wavered between 0.1 and 0.3. I was expecting the cross-entropy loss and the accuracy to be somewhat related, since they are both measured on the same test dataset.
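The decoupling you describe is possible because the two metrics summarize the predicted probabilities differently. A minimal numeric sketch (hypothetical probabilities, binary classification) showing that cross-entropy can drop substantially while accuracy does not move at all:

```python
import numpy as np

def cross_entropy(p_true):
    # Mean negative log-likelihood of the true class.
    return float(-np.mean(np.log(p_true)))

def accuracy(p_true):
    # A sample counts as correct when the true class gets > 0.5.
    return float(np.mean(p_true > 0.5))

# Hypothetical probabilities assigned to the TRUE class for 4 samples.
# "Epoch A": confidently right on two samples, confidently wrong on two.
p_a = np.array([0.9, 0.9, 0.1, 0.1])
# "Epoch B": the same samples, but all probabilities pulled toward 0.5.
p_b = np.array([0.55, 0.55, 0.45, 0.45])

print(cross_entropy(p_a), accuracy(p_a))  # ~1.204, 0.5
print(cross_entropy(p_b), accuracy(p_b))  # ~0.698, 0.5
```

Between the two snapshots the cross-entropy falls from about 1.20 to about 0.70, yet the accuracy is stuck at 0.5 in both, because no sample crosses the 0.5 decision boundary.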

Can somebody explain this to me? Or do I have a mistake in my code?

Thanks a lot

Gemini

1 Answer


The cross-entropy is not always directly related to the error metric, although it usually correlates well enough with the error rate. Another typical choice is to minimize the Bayes risk, which is simply the expectation, with respect to your model's predicted distribution, of the error (or, conversely, of the accuracy). This is a continuous loss and should correlate better with your error rate. Tracking the training error rate alongside the loss is usually a good idea as well.
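The expected-error quantity mentioned above can be sketched as follows; the function names and the toy probabilities are illustrative, not from the original post. Instead of counting hard mistakes, it averages the probability mass the model assigns to the wrong classes, which makes it a smooth surrogate for the 0/1 error rate:

```python
import numpy as np

def expected_error(probs, labels):
    # Expected error under the model: mean probability mass
    # assigned to classes other than the true one.
    p_true = probs[np.arange(len(labels)), labels]
    return float(np.mean(1.0 - p_true))

def error_rate(probs, labels):
    # Hard 0/1 error: fraction of samples whose argmax is wrong.
    return float(np.mean(np.argmax(probs, axis=1) != labels))

# Toy predicted distributions for 4 samples, 2 classes.
probs = np.array([[0.80, 0.20],
                  [0.60, 0.40],
                  [0.30, 0.70],
                  [0.45, 0.55]])
labels = np.array([0, 0, 1, 0])

print(expected_error(probs, labels))  # 0.3625 — varies smoothly
print(error_rate(probs, labels))      # 0.25   — changes in jumps
```

Because `expected_error` changes continuously as the probabilities move, small improvements in the model show up immediately, whereas `error_rate` only changes when a prediction crosses the decision boundary.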

drpng