
I am trying to build a sequence-to-sequence model in Keras using an LSTM and a dense neural network. The encoder encodes the input; the encoded state and the inputs are then concatenated and fed into a decoder, which is an LSTM + dense network that outputs categorical labels over time. Below is what my code looks like:

from keras.utils import to_categorical
from keras.layers import Embedding, Bidirectional, GRU, Dense, TimeDistributed, LSTM, Input, Lambda
from keras.models import Sequential, Model
import numpy as np
from keras import preprocessing
import keras

# Encoder: embed the input sequence and run it through an LSTM, keeping its final states
encoder_inputs_seq = Input(shape=(114,))
encoder_inputs = Embedding(input_dim=1000 + 1, output_dim=20)(encoder_inputs_seq)

x, state_h, state_c = LSTM(32, return_state=True)(encoder_inputs)
states = [state_h, state_c]

# Decoder layers, shared across all timesteps
decoder_lstm = LSTM(32, return_sequences=True, return_state=True)
decoder_dense = Dense(9, activation='softmax')

all_outputs = []

# Initial decoder input: the encoder's hidden state, expanded to a single timestep
input_state = keras.layers.RepeatVector(1)(state_h)


for i in range(5):
    # Run the decoder on one timestep
    new_input = keras.layers.concatenate([input_state, keras.layers.RepeatVector(1)(encoder_inputs[:, 1, :])], axis = -1)

    outputs, state_h, state_c = decoder_lstm(new_input,
                                             initial_state=states)
    outputs = decoder_dense(outputs)
    # Store the current prediction (we will concatenate all predictions later)
    all_outputs.append(outputs)
    # Reinject the outputs as inputs for the next loop iteration
    # as well as update the states
    states = [state_h, state_c]
    input_state = keras.layers.RepeatVector(1)(state_h)

decoder_outputs = Lambda(lambda x: keras.layers.concatenate(x, axis=1))(all_outputs)

model = Model(encoder_inputs_seq, decoder_outputs)

model.summary()

I run into the following exception:

AttributeError: 'NoneType' object has no attribute '_inbound_nodes'

Where am I going wrong here?

Abdul Rahman

1 Answer


The problem is that you are slicing a tensor (encoder_inputs[:, 1, :]) without wrapping the operation in a Lambda layer. Every operation in a Keras model has to happen inside a layer. You can fix it by replacing the first line of code inside the for-loop with the following:

sliced = Lambda(lambda x: x[:, 1, :])(encoder_inputs)
new_input = keras.layers.concatenate(
    [input_state, keras.layers.RepeatVector(1)(sliced)],
    axis=-1)
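
For reference, here is a minimal sketch of how the full decoder loop could look with that change applied. It assumes the encoder_inputs, state_h, states, decoder_lstm and decoder_dense definitions from the question, and since the slice is the same at every step it is computed once before the loop:

# Wrap the slicing in a Lambda so Keras can track it as a layer in the graph
encoder_slice = Lambda(lambda x: x[:, 1, :])(encoder_inputs)
encoder_slice = keras.layers.RepeatVector(1)(encoder_slice)

all_outputs = []
input_state = keras.layers.RepeatVector(1)(state_h)

for i in range(5):
    # Decoder input for this step: previous hidden state + the sliced encoder input
    new_input = keras.layers.concatenate([input_state, encoder_slice], axis=-1)
    outputs, state_h, state_c = decoder_lstm(new_input, initial_state=states)
    all_outputs.append(decoder_dense(outputs))
    # Carry the updated states over to the next timestep
    states = [state_h, state_c]
    input_state = keras.layers.RepeatVector(1)(state_h)

With every operation expressed as a layer, the rest of your code (concatenating all_outputs and building the Model) should work unchanged, and model.summary() should no longer raise the _inbound_nodes error.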
Anna Krogager