
I would like to add a Pretrained Word Embedding to my Encoder-Decoder. Below is my code:

from keras.models import Model
from keras.layers import Input, LSTM, Dense

# Define an Encoder
encoder_inputs = Input(shape=(None, nEncoderToken))  # one-hot encoded source sequences
encoder = LSTM(embedding_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
encoder_states = [state_h, state_c]

# Define a Decoder
decoder_inputs = Input(shape=(None, nDecoderToken))  # one-hot encoded target sequences
decoder_lstm = LSTM(embedding_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = Dense(nDecoderToken, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

I have tried many approaches, but I just can't get it to work.

Adam L
  • It looks like keras to me. I suppose you could simply pass the loaded embedding matrix to the `embeddings_initializer` or `weights` parameter of an Embedding layer placed just before the LSTM layer (instead of connecting the Input layer to the LSTM directly)? – LemurPwned Jul 28 '20 at 09:38 (a sketch of this is shown after the comments)
  • When I try doing that, I get dimension issues. Either, `Error when checking input: expected input_0 to have 2 dimensions, but got array with shape (100, 50, 780)` Or, `Input 0 is incompatible with layer lstm_0: expected ndim=3, found ndim=4` – Adam L Jul 28 '20 at 11:16
  • If I remove the Input layer, then I get the Error, `Layer lstm_0 was called with an input that isn't a symbolic tensor.` So I am a little stumped ~ – Adam L Jul 28 '20 at 11:22
  • Simplest solution, I think, would be to use the sequential model (in general), but if you'd like to stick to this format, take a look at the answer here https://stackoverflow.com/questions/48850424/building-an-lstm-net-with-an-embedding-layer-in-keras and specifically the example linked there https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html – LemurPwned Jul 28 '20 at 11:26
  • Are you still looking for a solution? – Vahid Feb 16 '21 at 02:18
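
For reference, below is a minimal sketch of what LemurPwned's suggestion could look like. It is untested and makes a few assumptions: the model is fed integer token IDs rather than one-hot vectors (which is why the one-hot arrays above trigger the ndim errors), `encoder_embedding_matrix` and `decoder_embedding_matrix` are hypothetical precomputed arrays of shape (nEncoderToken, embedding_dim) and (nDecoderToken, embedding_dim) built from the pretrained vectors, and `latent_dim` stands in for the LSTM size that the original code calls embedding_dim.

# Sketch only: assumes integer-encoded sequences and hypothetical precomputed
# embedding matrices (encoder_embedding_matrix / decoder_embedding_matrix).
from keras.models import Model
from keras.layers import Input, Embedding, LSTM, Dense

latent_dim = 256  # assumed LSTM hidden size

# Encoder: token IDs -> frozen pretrained embeddings -> LSTM states
encoder_inputs = Input(shape=(None,))  # (batch, timesteps) of integer token IDs
encoder_embedded = Embedding(nEncoderToken, embedding_dim,
                             weights=[encoder_embedding_matrix],
                             trainable=False)(encoder_inputs)
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_embedded)
encoder_states = [state_h, state_c]

# Decoder: token IDs -> frozen pretrained embeddings -> LSTM seeded with encoder states
decoder_inputs = Input(shape=(None,))
decoder_embedded = Embedding(nDecoderToken, embedding_dim,
                             weights=[decoder_embedding_matrix],
                             trainable=False)(decoder_inputs)
decoder_outputs, _, _ = LSTM(latent_dim, return_sequences=True,
                             return_state=True)(decoder_embedded,
                                                initial_state=encoder_states)
decoder_outputs = Dense(nDecoderToken, activation='softmax')(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
# Targets would now be integer IDs as well, hence sparse categorical crossentropy
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy')

With this layout the training arrays on both sides have shape (samples, timesteps) of token IDs, and the inference-time encoder and decoder models are built exactly as in the keras.io seq2seq tutorial linked above.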

0 Answers