In the sample seq2seq code given by fchollet, how can I add more LSTM layers to the encoder and decoder? I'm having some trouble with the shapes and am a bit confused in general. Thanks.
1 Answer
Keras' functional API lets you call layers on tensors. This means you can chain another layer on top of the output of an existing layer by calling it. For example:
from keras.layers import Input, LSTM

encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_sequences=True)  # first layer returns the full sequence for the next layer
encoder_outputs, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder(encoder_inputs))

Primusa
I tried that and I'm getting this error: ValueError: Layer lstm_2 was called with an input that isn't a symbolic tensor. Received type: <…>. Full input: […]. All inputs to the layer should be tensors. – S.Mandal Apr 14 '18 at 05:34
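For reference, that ValueError typically appears when a Layer object is passed to another layer instead of its output tensor, e.g. forgetting to call the first LSTM on the inputs. A minimal illustration of the distinction (latent_dim and the token count here are hypothetical values for the sketch):

from keras.layers import Input, LSTM

latent_dim = 256                                        # hypothetical size for illustration
encoder_inputs = Input(shape=(None, 71))                # hypothetical token count
first = LSTM(latent_dim, return_sequences=True)         # a Layer object, not a tensor
# second = LSTM(latent_dim)(first)                      # wrong: passes the Layer itself, raises the ValueError above
second = LSTM(latent_dim)(first(encoder_inputs))        # right: call the layer first to get its output tensor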
Did you manage to get this working? I'm having the same problem. Specifically, during inference, how do we reconstruct the decoder? – Amel Music Sep 23 '18 at 09:25
@AmelMusic Yes, it works if you do it carefully. I defined something like c1, h1, c2, h2, etc. for 2 layers of LSTM. Also make sure you set return_sequences=True for the first layers. Read the docs about Model() too; that helped me understand what's going on. (Unfortunately my code was too long and confusing, so instead I decided to describe what I did, and it worked.) – Masood Lapeh Mar 02 '19 at 21:00
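Putting the answer and this comment together, here is a minimal sketch of a two-layer encoder and decoder for the training model. The variable names follow fchollet's example, but the sizes are hypothetical and the exact stacking is an interpretation of the comment above, not the commenter's actual code:

from keras.models import Model
from keras.layers import Input, LSTM, Dense

latent_dim = 256          # hypothetical sizes for illustration
num_encoder_tokens = 71
num_decoder_tokens = 93

# Encoder: two stacked LSTMs; keep the states of each layer (h1/c1 and h2/c2)
encoder_inputs = Input(shape=(None, num_encoder_tokens))
enc_seq, h1, c1 = LSTM(latent_dim, return_sequences=True, return_state=True)(encoder_inputs)
_, h2, c2 = LSTM(latent_dim, return_state=True)(enc_seq)

# Decoder: mirror the stacking; seed each layer with the matching encoder states
decoder_inputs = Input(shape=(None, num_decoder_tokens))
dec_seq = LSTM(latent_dim, return_sequences=True)(decoder_inputs, initial_state=[h1, c1])
dec_out = LSTM(latent_dim, return_sequences=True)(dec_seq, initial_state=[h2, c2])
decoder_outputs = Dense(num_decoder_tokens, activation='softmax')(dec_out)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

Note that both decoder layers return full sequences so the Dense layer can produce a prediction at every timestep; for the separate inference models you would additionally need to expose and feed back the per-layer states, as the comment hints.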