
I am working with the FER2013Plus dataset from https://github.com/Microsoft/FERPlus, which contains the fer2013new.csv file. This file contains the labels for each image in the dataset. An example label could be:

(4, 0, 0, 2, 1, 0, 0, 3)

where each dimension is a different emotion. Finally, in their paper https://arxiv.org/pdf/1608.01041.pdf, they convert the vote counts into a probability distribution by dividing each count by the total number of votes, so the label above becomes

(0.4, 0, 0, 0.2, 0.1, 0, 0, 0.3)

In other words, the person in the image is happy with probability 0.4, sad with probability 0.2, and so on, and the sum of the probabilities is 1.
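For illustration, that normalization can be sketched in plain Python (the vote counts here are the example above; the emotion order is assumed for illustration, not taken from the dataset):

```python
# Hypothetical vote counts for one image, as found in fer2013new.csv
votes = (4, 0, 0, 2, 1, 0, 0, 3)

# Convert counts to a probability distribution by dividing by the total votes
total = sum(votes)
probs = tuple(v / total for v in votes)

print(probs)  # (0.4, 0.0, 0.0, 0.2, 0.1, 0.0, 0.0, 0.3)
```

Since every count is divided by the same total, the resulting values always sum to 1, which is what a soft-label cross-entropy loss expects.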

While training, I used tf.nn.softmax_cross_entropy_with_logits_v2 to calculate the loss between my predictions and the labels. Now, how do I compute the accuracy?

Any help is much appreciated!!

I. A
1 Answer


Here is an excerpt from the paper:

"We take the majority emotion as the single emotion label, and we measure prediction accuracy against the majority emotion."

They are treating it as a discrete classification task. So you just need to take tf.argmax() of your logits to get the index of the predicted class, and then compare that with tf.argmax() of the labels.

For example, if your label is (0.4, 0, 0, 0.2, 0.1, 0, 0, 0.3), then happy is the majority emotion, so you would check whether your logits also have their maximum at happy.
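A minimal sketch of that majority-vote accuracy, written here in plain Python rather than TensorFlow (in TF you would apply tf.argmax to both tensors, compare with tf.equal, and average with tf.reduce_mean in the same way); the logits and labels below are made up for illustration:

```python
# Hypothetical batch: soft label distributions and raw model logits
labels = [
    (0.4, 0.0, 0.0, 0.2, 0.1, 0.0, 0.0, 0.3),
    (0.0, 0.7, 0.1, 0.0, 0.0, 0.2, 0.0, 0.0),
]
logits = [
    (2.1, -0.3, 0.0, 1.5, 0.2, -1.0, 0.0, 1.9),  # argmax = 0, matches label
    (0.5, 0.1, 1.2, 0.0, 0.0, 0.3, 0.0, 0.0),    # argmax = 2, label argmax = 1
]

def argmax(xs):
    """Index of the largest value (what tf.argmax does along the class axis)."""
    return max(range(len(xs)), key=lambda i: xs[i])

# Accuracy = fraction of samples whose predicted class equals the majority emotion
correct = [argmax(p) == argmax(y) for p, y in zip(logits, labels)]
accuracy = sum(correct) / len(correct)
print(accuracy)  # 0.5
```

Note that softmax is monotonic, so taking argmax of the raw logits gives the same class as taking argmax of the softmax probabilities.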

Naomi
  • Thank you @Jordan for your help. But I believe they mention in the paper that you can compute the loss in different ways, not just by considering the majority vote: cross entropy, probabilistic label drawing, and so on. – I. A Oct 18 '18 at 17:40
  • 1
    Sorry, I just skimmed the paper really quick to find that. Majority vote would definitely be the easier implementation, but maybe not the best implementation – Naomi Oct 18 '18 at 17:43
  • 1
    Actually, now that I've skimmed it again, it appears that for all four loss schemes, they compute accuracy against the majority vote in the end, probably for consistency. You could come up with an accuracy computation that is not majority vote if you wanted. – Naomi Oct 18 '18 at 17:56