I'm trying to use exact match / subset accuracy as a metric for my Keras model. I understand basically how it's supposed to work, but I'm having a hard time with the tensor manipulation.
I'm working on a multilabel classification task with 55 possible labels. I'm considering an output > 0.5 to be a positive for that label. I want a metric that describes how often the output exactly matches the true labels.
My approach is to convert y_true to tf.bool, and y_pred > 0.5 to tf.bool, and then return a tensor containing True if they match exactly and False otherwise. It appears to be working when I do basic tests, but when I train the model, the metric stays at 0.0000 without ever changing.
def subset_accuracy(y_true, y_pred):
    y_pred_bin = tf.cast(y_pred > 0.5, tf.bool)
    equality = tf.equal(tf.cast(y_true, tf.bool), y_pred_bin)
    return tf.equal(
        tf.cast(tf.math.count_nonzero(equality), tf.int32),
        tf.size(y_true)
    )
I'm expecting to see the metric climb slowly during training, even if it only reaches 50% or so, but it stays at 0.0.
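In case it matters, the metric is passed to compile() along these lines (the optimizer and loss shown here are just placeholders, not necessarily my exact setup):

model.compile(
    optimizer='adam',
    loss='binary_crossentropy',   # multilabel, one sigmoid output per label
    metrics=[subset_accuracy]
)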