
I am building a bidirectional LSTM to do multi-class sentence classification. I have 13 classes in total to choose from, and I multiply the output of my LSTM network by a matrix of dimensionality [2*num_hidden_unit, num_classes] and then apply softmax to get the probability of the sentence falling into one of the 13 classes.

So if we consider output[-1] as the network output:

    W_output = tf.Variable(tf.truncated_normal([2*num_hidden_unit, num_classes]))
    bias = tf.Variable(tf.zeros([num_classes]))  # projection bias
    result = tf.matmul(output[-1], W_output) + bias

and I get my [1, 13] matrix (assuming I am not working with batches for the moment).

Now, I also know for sure that a given sentence does not fall into certain classes, and I want to restrict the number of classes considered for that sentence. So let's say, for instance, that for a given sentence I know it can fall into only 6 of the classes, so the output should really be a matrix of dimensionality [1, 6].

One option I was thinking of is to put a mask over the result matrix, where I multiply the rows corresponding to the classes that I want to keep by 1 and the ones I want to discard by 0, but in this way I will just lose some of the probability mass instead of redistributing it to the remaining classes. A sketch of the idea is below.
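Just to make that concrete, here is a minimal sketch of the masking, assuming result holds the [1, 13] scores from the snippet above; the class_mask placeholder and the other names are illustrative only, not part of my actual code. The final renormalization line is one way to redistribute the discarded mass rather than lose it:

    import tensorflow as tf

    class_mask = tf.placeholder(tf.float32, [1, num_classes])  # 1.0 = keep, 0.0 = discard
    probs = tf.nn.softmax(result)        # [1, 13] class probabilities
    masked = probs * class_mask          # zero out the discarded classes
    # Renormalize so the kept classes sum to 1 again, redistributing
    # the discarded mass instead of simply losing it
    renormalized = masked / tf.reduce_sum(masked, axis=1, keepdims=True)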

Does anyone have a clue what to do in this case?

user1718064

1 Answer


I think your best bet is, as you seem to have described, to use a weighted cross-entropy loss function, where the weights for your impossible classes are 0 and the weights for the possible classes are 1. TensorFlow has a weighted cross-entropy loss function; a sketch of the idea follows.
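As a rough sketch only, assuming result holds the [1, num_classes] logits from your question, labels is a one-hot [1, num_classes] target, and class_weights is a hypothetical 0/1 weight vector (none of these names come from your post):

    import tensorflow as tf

    class_weights = tf.placeholder(tf.float32, [num_classes])  # 0.0 for impossible classes, 1.0 otherwise
    probs = tf.nn.softmax(result)
    # Per-class cross-entropy terms, each scaled by its class weight;
    # the small epsilon guards against log(0)
    weighted_ce = -tf.reduce_sum(class_weights * labels * tf.log(probs + 1e-10), axis=1)
    loss = tf.reduce_mean(weighted_ce)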

Another interesting but probably less effective method is to feed whatever information you have about which classes your sentence can or cannot fall into to the network at some point (probably towards the end), for example as sketched below.
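One way that could look, purely as a sketch: concatenate a binary "allowed classes" indicator with the LSTM output before the final projection. The allowed placeholder and the enlarged weight matrix are assumptions for illustration, not something from your code:

    import tensorflow as tf

    allowed = tf.placeholder(tf.float32, [1, num_classes])  # 1.0 where the class is possible
    # Append the indicator to the LSTM output before the projection
    features = tf.concat([output[-1], allowed], axis=1)
    # The projection matrix grows to accept the extra num_classes inputs
    W_output = tf.Variable(tf.truncated_normal([2*num_hidden_unit + num_classes, num_classes]))
    bias = tf.Variable(tf.zeros([num_classes]))
    result = tf.matmul(features, W_output) + bias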

bnorm