In the decoder part of seq2seq, decoding works like language modeling: given an input word and the hidden state, predict the next word. How could bidirectional information be used in this mechanism? And with a bidirectional RNN decoder, would we still have to generate the output words one by one? Thank you.
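For concreteness, here is a minimal sketch of the standard unidirectional decoder step the question describes. PyTorch and the GRU choice are assumptions on my part (the question names no framework), and the `Decoder` class and dimension names are illustrative:

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    # Standard unidirectional seq2seq decoder: at each step it takes the
    # previously generated word and the hidden state, and predicts the next word.
    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, prev_token, hidden):
        # prev_token: (batch, 1) ids of the previously generated word
        emb = self.embed(prev_token)            # (batch, 1, embed_dim)
        output, hidden = self.rnn(emb, hidden)  # hidden carries left-to-right context only
        logits = self.out(output.squeeze(1))    # (batch, vocab_size)
        return logits, hidden

# One greedy decoding step:
#   logits, hidden = decoder(prev_token, hidden)
#   prev_token = logits.argmax(dim=-1, keepdim=True)
```

The question is how a right-to-left pass could fit here, given that at generation time the words to the right of the current position do not exist yet.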
I also have the same question. Does someone know about this, or has anyone implemented it? – subho Apr 27 '19 at 12:22