I have this legacy code that was implemented in TensorFlow 1.0.1. I want to convert the current LSTM cell to a bidirectional LSTM.
with tf.variable_scope("encoder_scope") as encoder_scope:
    cell = contrib_rnn.LSTMCell(num_units=state_size, state_is_tuple=True)
    cell = DtypeDropoutWrapper(cell=cell, output_keep_prob=tf_keep_probabiltiy, dtype=DTYPE)
    cell = contrib_rnn.MultiRNNCell(cells=[cell] * num_lstm_layers, state_is_tuple=True)
    encoder_cell = cell

    encoder_outputs, last_encoder_state = tf.nn.dynamic_rnn(
        cell=encoder_cell,
        dtype=DTYPE,
        sequence_length=encoder_sequence_lengths,
        inputs=encoder_inputs,
    )
I found some examples out there, e.g. https://riptutorial.com/tensorflow/example/17004/creating-a-bidirectional-lstm
But I cannot convert my LSTM cell to a bidirectional LSTM cell by referring to them. What should be put into state_below in my case?
Update: Apart from the above issue, I need to clarify how to convert the following decoder network (dynamic_rnn_decoder) to use a bidirectional LSTM. (The documentation does not give any clue about that.)
with tf.variable_scope("decoder_scope") as decoder_scope:
    decoder_cell = tf.contrib.rnn.LSTMCell(num_units=state_size)
    decoder_cell = DtypeDropoutWrapper(cell=decoder_cell, output_keep_prob=tf_keep_probabiltiy, dtype=DTYPE)
    decoder_cell = contrib_rnn.MultiRNNCell(cells=[decoder_cell] * num_lstm_layers, state_is_tuple=True)

    # define decoder train network
    decoder_outputs_tr, _, _ = dynamic_rnn_decoder(
        cell=decoder_cell,  # the cell function
        decoder_fn=simple_decoder_fn_train(last_encoder_state, name=None),
        inputs=decoder_inputs,
        sequence_length=decoder_sequence_lengths,
        parallel_iterations=None,
        swap_memory=False,
        time_major=False)
Can anyone please clarify?