
My label looks like this:

label = [0, 1, 0, 0, 1, 1, 0]

In other words, classes 1, 4, and 5 are present in the corresponding sample. I believe this is called a soft class.

I'm calculating my loss with:

logits = tf.layers.dense(encoding, 7, activation=None)

cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(
    labels=labels,
    logits=logits
)

loss = tf.reduce_mean(cross_entropy)

According to TensorBoard, the loss is decreasing over time, as expected. However, the accuracy is flat at zero:

eval_metric_ops = {
    'accuracy': tf.metrics.accuracy(labels=labels, predictions=logits),
}
tf.summary.scalar('accuracy', eval_metric_ops['accuracy'][1])

How do I calculate the accuracy of my model when using soft classes?

rodrigo-silveira
  • Is that a single label which might belong to a number of classes? – enterML Jan 27 '18 at 07:21
  • I think you are using one-hot encoding for your labels. Since this is a multi-class problem I would suggest using softmax_cross_entropy_with_logits instead of the sigmoid. Moreover, try to apply the softmax function to the logits and feed the output as the predictions to the accuracy instead of the logits. – jan bernlöhr Jan 27 '18 at 10:46

1 Answer


Did you solve this? I think the comment about softmax_cross_entropy_with_logits is incorrect, because this is a multi-label problem in which each label is an independent binary class.

Partial solution:

labels = tf.constant([1, 1, 1, 0, 0, 0])       # example ground-truth labels
predictions = tf.constant([0, 1, 0, 0, 1, 0])  # example hard 0/1 predictions
is_equal = tf.equal(labels, predictions)
accuracy = tf.reduce_mean(tf.cast(is_equal, tf.float32))
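
In practice the hard predictions would come from the logits rather than hand-written constants. A minimal sketch, assuming the logits and labels tensors from the question and the usual 0.5 threshold on the sigmoid output; the accuracy computation is then the same as above:

probabilities = tf.sigmoid(logits)                      # per-class probabilities in [0, 1]
predictions = tf.cast(probabilities > 0.5, tf.float32)  # hard 0/1 decision per class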

This gives a number, but it still needs to be converted into a tf metric.
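
One way to do that conversion (a sketch, assuming the thresholded predictions above and TF 1.x): tf.metrics.accuracy compares labels and predictions element-wise and returns a (value, update_op) pair, so the thresholded predictions can be passed in place of the raw logits from the question:

accuracy_op, accuracy_update_op = tf.metrics.accuracy(
    labels=labels,
    predictions=predictions,
)

eval_metric_ops = {'accuracy': (accuracy_op, accuracy_update_op)}
tf.summary.scalar('accuracy', accuracy_op)

Note that this counts per-label matches, so a sample with 6 of its 7 labels correct contributes 6/7 rather than 0; if you want exact-match (subset) accuracy you would need to reduce over the label axis before averaging.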

RonJRH