
In the decoder part of a Seq2seq model, decoding works like language modeling: given an input word and the hidden state, the decoder predicts the next word. How could bidirectional information be used in this mechanism? And with a bidirectional RNN decoder, would we still have to generate the sentence one word at a time? Thank you.
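
For reference, here is a minimal sketch of the unidirectional, one-word-at-a-time decoder step I mean (PyTorch; the class and method names are my own illustration, not from any particular library):

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hid_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # The decoder GRU is unidirectional: at generation time only the
        # tokens produced so far exist, so there is no right-to-left
        # context available for a backward direction.
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def step(self, prev_token, hidden):
        # prev_token: (batch,) ids of the previously generated word
        # hidden:     (1, batch, hid_dim) current decoder state
        emb = self.embed(prev_token).unsqueeze(1)   # (batch, 1, emb_dim)
        output, hidden = self.rnn(emb, hidden)      # one language-model step
        logits = self.out(output.squeeze(1))        # (batch, vocab_size)
        return logits, hidden                       # argmax/sample -> next word
```

My question is whether, and how, this step could ever use information from words to the right of the current position.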

