
I have just started working with TensorFlow. I'm following the tutorial on neural machine translation by Thang Luong, Eugene Brevdo, and Rui Zhao here:

https://www.tensorflow.org/tutorials/seq2seq

I want to use the model as an auto-encoder, but I was wondering how to get the encoder hidden states at inference time.

Any help would be much appreciated.

Peter Pan

1 Answer


If you are using the dynamic encoder architecture defined in the NMT tutorial:

# Build RNN cell
encoder_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units)

# Run Dynamic RNN
#   encoder_outputs: [max_time, batch_size, num_units]
#   encoder_state: for BasicLSTMCell, an LSTMStateTuple (c, h),
#                  each of shape [batch_size, num_units]
encoder_outputs, encoder_state = tf.nn.dynamic_rnn(
    encoder_cell, encoder_emb_inp,
    sequence_length=source_sequence_length,
    time_major=True, dtype=tf.float32)

Then doing a sess.run([encoder_state], feed_dict={...})[0] will return the encoder state after the final time step. For a BasicLSTMCell this is an LSTMStateTuple (c, h), where h is the hidden state usually taken as the encoding. If you want the hidden states at every time step, fetch encoder_outputs instead; for more detail, I would refer to this Stack Overflow question.
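To make the indexing concrete: because time_major=True, the first axis of encoder_outputs is time, and with padded batches the last *valid* hidden state of each sequence sits at index length-1, not at max_time-1. Below is a minimal NumPy sketch of that gather, using random arrays as a stand-in for what sess.run(encoder_outputs) would return (the shapes and names are assumptions for illustration):

```python
import numpy as np

# Stand-in for sess.run(encoder_outputs): time-major [max_time, batch, units]
max_time, batch_size, num_units = 5, 3, 4
encoder_outputs = np.random.rand(max_time, batch_size, num_units)
source_sequence_length = np.array([5, 3, 4])  # true (unpadded) lengths

# Gather each sequence's last valid output: time index length-1, paired
# with its batch index. Padded steps after length-1 carry no signal.
last_valid = encoder_outputs[source_sequence_length - 1,
                             np.arange(batch_size)]  # [batch_size, num_units]

print(last_valid.shape)  # (3, 4)
```

For an LSTM, this per-sequence last output matches encoder_state.h, since dynamic_rnn stops updating a sequence's state once it passes its sequence_length.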

Dan Salo