
I'm new to NLP and I am trying to understand how to use pre-trained word embeddings, such as fastText, with an existing Seq2Seq model. The Seq2Seq model I'm working with is the following: the encoder is simple, and the decoder is a Pointer-Generator Network with a CRF on top. Both of them use an embedding layer.

The question: if I have my own dataset and vocabulary, how do I use both my own vocab and the one from fastText? Do I have to use the fastText weights in both the encoder and the decoder?
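To make the question concrete, here is a minimal sketch of what I think the first step would look like: building an embedding matrix ordered by *my* vocabulary, filling each row from fastText when the word is known and randomly initializing the rest. The `fasttext_vectors` dict below is a stand-in (in practice I'd load real vectors, e.g. with the `fasttext` library's `load_model`), and `build_embedding_matrix` is a hypothetical helper, not part of any library:

```python
import numpy as np

DIM = 4  # real fastText vectors are typically 300-dimensional

# Stand-in for pre-trained fastText vectors (assumed values for illustration)
fasttext_vectors = {
    "the": np.ones(DIM, dtype=np.float32),
    "cat": np.full(DIM, 2.0, dtype=np.float32),
}

def build_embedding_matrix(vocab, pretrained, dim, seed=0):
    """Rows follow the order of *my* vocab; fastText only supplies the values."""
    rng = np.random.default_rng(seed)
    matrix = np.zeros((len(vocab), dim), dtype=np.float32)
    for idx, word in enumerate(vocab):
        if word in pretrained:
            matrix[idx] = pretrained[word]           # known word: copy fastText vector
        else:
            matrix[idx] = rng.normal(0.0, 0.1, dim)  # OOV / special token: small random init
    return matrix

vocab = ["<pad>", "<unk>", "the", "cat"]  # my own vocabulary, including special tokens
emb = build_embedding_matrix(vocab, fasttext_vectors, DIM)
print(emb.shape)
```

The resulting matrix could then be copied into the encoder's embedding layer (e.g. via `nn.Embedding.from_pretrained` in PyTorch), but it's exactly the decoder part I'm unsure about.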

sarah
