
This may be a stupid question, but:

I noticed that the choice of the loss function modifies the accuracy obtained during evaluation.

I thought that the loss was used only during training. Of course the quality of the model's predictions depends on it, but I didn't expect it to change the reported accuracy, i.e. the number of correct predictions over the total number of samples.

EDIT

I didn't explain myself correctly.

My question comes because I recently trained a model with `binary_crossentropy` loss, and the accuracy reported by `model.evaluate()` was 96%. But it wasn't correct! I checked "manually" and the model was only getting 44% of its predictions right. When I changed to `categorical_crossentropy`, the accuracy was correct.

POSSIBLE ANSWER (from another question):

I have found the problem. `metrics=['accuracy']` chooses the accuracy metric automatically based on the cost function. So using `binary_crossentropy` reports binary accuracy, not categorical accuracy. Using `categorical_crossentropy` automatically switches to categorical accuracy, and now it matches the value calculated manually using `model1.predict()`.
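The discrepancy can be reproduced without Keras at all. The sketch below (plain Python, names and sample data invented for illustration) implements the two metrics the way Keras computes them: categorical accuracy compares argmaxes per sample, while binary accuracy thresholds every entry of the output vector and averages over all entries. Because one-hot labels are mostly zeros, the binary number comes out inflated:

```python
def argmax(v):
    """Index of the largest entry of a list."""
    return max(range(len(v)), key=v.__getitem__)

def categorical_accuracy(y_true, y_pred):
    """Fraction of samples whose predicted argmax matches the label's argmax."""
    hits = sum(argmax(p) == argmax(t) for t, p in zip(y_true, y_pred))
    return hits / len(y_true)

def binary_accuracy(y_true, y_pred, threshold=0.5):
    """Fraction of individual entries whose thresholded value matches the label."""
    hits = total = 0
    for t, p in zip(y_true, y_pred):
        for ti, pi in zip(t, p):
            hits += int((pi > threshold) == bool(ti))
            total += 1
    return hits / total

# One-hot labels for 4 samples, 3 classes (made-up data)
y_true = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]]
y_pred = [[0.4, 0.35, 0.25],  # argmax correct
          [0.3, 0.3, 0.4],    # argmax wrong
          [0.2, 0.2, 0.6],    # argmax correct
          [0.3, 0.4, 0.3]]    # argmax wrong

print(categorical_accuracy(y_true, y_pred))  # 0.5  -> the "real" accuracy
print(binary_accuracy(y_true, y_pred))       # 0.75 -> inflated: zeros dominate one-hot labels
```

The same mechanism explains a 96% binary accuracy over a model that is right on only 44% of samples: with many classes, most entries of a one-hot label are 0, and a model that outputs small probabilities everywhere "matches" nearly all of them.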

Francesco Pegoraro
  • How exactly did you check manually? Binary cross-entropy will change which accuracy metric is used; for binary CE, Keras will threshold/round the output to produce binary predictions. – Dr. Snoopy Sep 13 '18 at 13:06
  • I checked using `predict_classes` on the test set and calculating `correct_predictions/total_predictions`. And maybe it would work like you suggested with a precise metric: I was using `metrics=['accuracy']` when I should have used `categorical_accuracy`. – Francesco Pegoraro Sep 13 '18 at 13:09

3 Answers


Keras chooses the performance metric based on your loss function. When you use `binary_crossentropy`, it also uses `binary_accuracy`, which is computed differently than `categorical_accuracy`. You should use `categorical_crossentropy` whenever you have more than one output neuron.
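One way to sidestep the ambiguity entirely is to name the metric explicitly instead of the generic `'accuracy'` alias, so the reported number does not silently depend on the loss. A minimal sketch, assuming the standard `tf.keras` API (the layer sizes and input shape here are placeholders, not from the question):

```python
from tensorflow import keras

# Toy model: 3-class softmax classifier over 4 input features (placeholder shapes)
model = keras.Sequential([
    keras.layers.Dense(3, activation="softmax", input_shape=(4,)),
])

model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["categorical_accuracy"],  # explicit, not the loss-dependent 'accuracy' alias
)
```

With the metric spelled out, switching the loss later will change the training objective but not which accuracy is reported.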

Sebastian E

The model tries to minimize the loss function chosen. It adjusts the weights to do this. A different loss function results in different weights.

Those weights determine how many correct predictions are made over the total number of samples. So it is correct behavior to see that the loss function chosen will affect the model accuracy.

jeffhale
  • I didn't explain myself correctly. My question comes because I recently trained a model with `binary_crossentropy` loss, and the accuracy coming from `model.evaluate()` was 96%. But it wasn't correct! I checked "manually" and the model was only getting 44% of its predictions right. When I changed to `categorical_crossentropy`, the accuracy was correct. – Francesco Pegoraro Sep 13 '18 at 12:59

From another question:

I have found the problem. `metrics=['accuracy']` chooses the accuracy metric automatically based on the cost function. So using `binary_crossentropy` reports binary accuracy, not categorical accuracy. Using `categorical_crossentropy` automatically switches to categorical accuracy, and now it matches the value calculated manually using `model1.predict()`.

Francesco Pegoraro