
I am trying to calculate the accuracy of my network using KL divergence. The prediction is a k-dimensional probability vector, which is to be compared against a gold-standard probability distribution of the same dimensionality. I tried this:

corr_subj_test = tf.contrib.distributions.kl(pred_subj, y)
accr_subj_test = tf.reduce_mean(corr_subj_test)

But I eventually get the following error:

NotImplementedError: No KL(dist_a || dist_b) registered for dist_a type Tensor and dist_b type Tensor

timbmg

1 Answer


Checking the TensorFlow GitHub repository and other issues that report the same NotImplementedError (like this one), it seems that the kl() method does not currently accept that particular combination of argument types: KL divergences are registered for Distribution objects, not for plain Tensors.

If it is possible, you could pass your data to kl() in a type it does accept (which may mean transforming or wrapping your data first); one way to do this is sketched below.**
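
For instance, a minimal sketch assuming the TF 1.x contrib API, where Categorical takes a probs argument (older releases used p instead, and newer ones rename kl to kl_divergence): wrap both probability tensors in Categorical distributions, for which a KL(Categorical || Categorical) registration does exist.

import tensorflow as tf

ds = tf.contrib.distributions

# Wrap the raw probability tensors in Distribution objects so that a
# registered KL computation can be found.
dist_pred = ds.Categorical(probs=pred_subj)  # predicted distribution over k classes
dist_gold = ds.Categorical(probs=y)          # gold-standard distribution

corr_subj_test = ds.kl(dist_pred, dist_gold)     # per-example KL divergence
accr_subj_test = tf.reduce_mean(corr_subj_test)  # mean over the batch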

You could also open an issue on the TensorFlow GitHub tracker to discuss your problem.

** Edit:

As suggested and explained by the answer to this question, you can obtain your desired result by using cross entropy instead, via the softmax_cross_entropy_with_logits method, like this:

# Cross entropy of pred_subj against pred_subj / y (passed as logits);
# its negation is used as the KL-style score.
newY = pred_subj / y
crossE = tf.nn.softmax_cross_entropy_with_logits(labels=pred_subj, logits=newY)
accr_subj_test = tf.reduce_mean(-crossE)
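
Note that softmax_cross_entropy_with_logits applies a softmax to its logits argument internally, so if pred_subj and y are already valid probability vectors you may prefer the direct formula. A minimal sketch, assuming strictly positive entries in both tensors (add a small epsilon otherwise):

# KL(pred_subj || y) = sum_k pred_subj * log(pred_subj / y), computed per example
kl_per_example = tf.reduce_sum(pred_subj * tf.log(pred_subj / y), axis=-1)
accr_subj_test = tf.reduce_mean(kl_per_example)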
DarkCygnus