
I am doing a semantic segmentation task using TensorFlow. I have 5 classes and I calculate the loss like this:

loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=tf.squeeze(annotation, axis=[3]), name="entropy"))

logits has shape (batch_size, picture_height, picture_width, 5)

annotation has shape (batch_size, picture_height, picture_width, 1)

Now I want to calculate the loss over only the first 4 classes and ignore the 5th class. How can I achieve this?

For example, if I only want to calculate Cohen's kappa for the first 4 classes, I can set the labels parameter of sklearn.metrics.cohen_kappa_score:

kappa = cohen_kappa_score(y_true, y_pred, labels=[0,1,2,3])
  • So what you mean is, you compute the softmax with the five logit values, then the cross-entropy, but wherever the class was 5 just mask that value? And then the mean should be done over all the elements, zeroing the 5 class cross-entropy, or only over the non-5 elements? – jdehesa Mar 16 '18 at 12:19
  • Just compute the cross entropy loss of non-5 pixels. – xiaopl Mar 17 '18 at 12:02

1 Answer


You can use the non-sparse version of the cross-entropy loss, tf.losses.softmax_cross_entropy, which accepts one-hot labels, and create the one-hot labels manually using tf.one_hot.

tf.one_hot accepts a depth argument that lets you encode only the first four classes (a pixel labelled with the fifth class then becomes an all-zero row and contributes nothing to the loss), or you can simply slice the resulting one-hot tensor before passing it to the loss. Either way, remember to slice the logits to the matching four channels.
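
A minimal sketch of this approach, assuming TF 1.x, labels in {0, ..., 4} with class index 4 as the one to ignore, and illustrative placeholders standing in for the question's tensors. The mask is passed as weights so that, as discussed in the comments, the mean is taken only over the non-ignored pixels; a masked variant of the original sparse loss is shown as well:

import tensorflow as tf

# Illustrative placeholders matching the shapes in the question.
logits = tf.placeholder(tf.float32, [None, None, None, 5])    # (batch, h, w, 5)
annotation = tf.placeholder(tf.int32, [None, None, None, 1])  # (batch, h, w, 1)

labels = tf.squeeze(annotation, axis=[3])                     # (batch, h, w)
mask = tf.cast(tf.not_equal(labels, 4), tf.float32)           # 1.0 where class != 4

# Option 1: one-hot encode with depth=4. Labels 0..3 get a proper one-hot
# row; label 4 maps to an all-zero row, so it adds nothing to the loss.
# The logits are sliced to the first 4 channels so the shapes match, and
# the mask serves as per-pixel weights: the default reduction
# (SUM_BY_NONZERO_WEIGHTS) then averages only over the non-ignored pixels.
onehot = tf.one_hot(labels, depth=4)                          # (batch, h, w, 4)
loss = tf.losses.softmax_cross_entropy(
    onehot_labels=onehot,
    logits=logits[..., :4],
    weights=mask)

# Option 2 (the reading from the comments): keep the softmax over all 5
# logits, compute the sparse loss per pixel, and average over the
# non-ignored pixels only.
per_pixel = tf.nn.sparse_softmax_cross_entropy_with_logits(
    logits=logits, labels=labels)                             # (batch, h, w)
loss_masked = (tf.reduce_sum(per_pixel * mask)
               / tf.maximum(tf.reduce_sum(mask), 1.0))

Note the difference between the two options: slicing the logits renormalizes the softmax over 4 classes, while the masked sparse loss keeps the softmax over all 5 classes and only excludes class-4 pixels from the average.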
