
I am trying to train a seq2seq translator using the Keras functional API. The following code works fine:

from keras.layers import Input, LSTM

# single-direction encoder: returns the output plus the h and c states
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
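
For context, I then feed these two states into the decoder as its initial state, roughly like this (num_decoder_tokens and the decoder names are just from my own setup):

decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=[state_h, state_c])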

I now want to try the bidirectional LSTM. My try:

from keras.layers import Bidirectional

encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = Bidirectional(LSTM(latent_dim, return_state=True))
encoder_outputs, state_h, state_c = encoder(encoder_inputs)

This returns an error:

ValueError                                Traceback (most recent call last)
<ipython-input-25-6ae24c1319f3> in <module>()
  6 encoder = Bidirectional(LSTM(latent_dim, return_state=True))
  7 print(len(encoder(encoder_inputs)))
----> 8 encoder_outputs, state_h, state_c = encoder(encoder_inputs)
  9 
 10 # We discard `encoder_outputs` and only keep the states.

ValueError: too many values to unpack (expected 3)
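
Based on the error, my guess is that the bidirectional wrapper returns the forward and backward states separately (five tensors in total), so something like the sketch below might be needed; concatenating the two directions is just my assumption:

from keras.layers import Concatenate

# unpack output plus forward/backward h and c states, then merge each pair
encoder_outputs, forward_h, forward_c, backward_h, backward_c = encoder(encoder_inputs)
state_h = Concatenate()([forward_h, backward_h])
state_c = Concatenate()([forward_c, backward_c])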

How do I extract the h and c states from a bidirectional LSTM?
