
How can tfa.seq2seq.BeamSearchDecoder be used with a simple encoder-decoder architecture? Suppose the task is machine translation, where the encoder returns a vector representation of the input sequence and the decoder uses Embedding, LSTM and Dense layers to translate the text word by word. I get the error "Argument 'cell' (<keras.layers.rnn.lstm.LSTM object at 0x000002658BF13C40>) is not RNNCell: property 'output_size' is missing, property 'state_size' is missing." when I try to set:

beam_search_decoder = tfa.seq2seq.BeamSearchDecoder(
    cell=model.decoder.lstm,
    ...
)

There are very few sources, and the only example I found uses the attention mechanism. How should I create a beam search decoder based on a simple decoder with an LSTM layer? My current guess at the setup is sketched below.
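
For context, here is a minimal sketch of what I think the setup should look like, adapted from the attention example. The names model.decoder.lstm, model.decoder.embedding, model.decoder.dense, model.encoder, start_token_id and end_token_id come from my own model and vocabulary; passing the LSTM layer's underlying .cell and tiling the encoder state with tfa.seq2seq.tile_batch are my assumptions, not something I found documented for the no-attention case:

    import tensorflow as tf
    import tensorflow_addons as tfa

    beam_width = 5

    # BeamSearchDecoder expects an RNN *cell*, not the LSTM layer itself,
    # so pass the layer's underlying LSTMCell (my assumption).
    decoder_cell = model.decoder.lstm.cell

    beam_search_decoder = tfa.seq2seq.BeamSearchDecoder(
        cell=decoder_cell,
        beam_width=beam_width,
        output_layer=model.decoder.dense,  # Dense projection to vocabulary size
    )

    # Assumed: the encoder returns the LSTM states [h, c].
    # They must be tiled beam_width times for beam search.
    encoder_state = model.encoder(encoder_inputs)
    decoder_initial_state = tfa.seq2seq.tile_batch(
        encoder_state, multiplier=beam_width)

    batch_size = tf.shape(encoder_inputs)[0]
    start_tokens = tf.fill([batch_size], start_token_id)  # id of "<start>"

    # The decoder embeds its own predicted ids with the Embedding layer's matrix.
    embedding_matrix = model.decoder.embedding.variables[0]

    outputs, final_state, sequence_lengths = beam_search_decoder(
        embedding_matrix,
        start_tokens=start_tokens,
        end_token=end_token_id,
        initial_state=decoder_initial_state,
    )

    # outputs.predicted_ids has shape [batch_size, time, beam_width];
    # beam 0 should be the highest-scoring hypothesis.
    best_translation_ids = outputs.predicted_ids[:, :, 0]

Is this the intended way to do it without attention, or is there a simpler pattern for a plain LSTM decoder?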

  • You might want to post this to https://datascience.stackexchange.com. It's a portal dedicated to machine learning and deep learning related queries. – Azhar Khan Jan 03 '23 at 09:31

0 Answers