I'm having some difficulty understanding the flow of data between cells in a stacked LSTM network. I have this network:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Bidirectional, Dropout, Dense

def make_model(x_train):
    # Build a stacked (bidirectional) LSTM model with a linear output layer.
    model = Sequential()
    # input_shape is passed to the Bidirectional wrapper so Keras can build the first layer.
    model.add(Bidirectional(LSTM(units=30, return_sequences=True),
                            input_shape=(x_train.shape[1], 1)))
    model.add(Dropout(0.2))
    model.add(LSTM(units=30, return_sequences=True))
    model.add(Dropout(0.2))
    model.add(LSTM(units=30, return_sequences=True))
    model.add(Dropout(0.2))
    # Last LSTM layer has return_sequences=False, so it returns only the final hidden state.
    model.add(LSTM(units=30))
    model.add(Dropout(0.2))
    # n_future is defined elsewhere in my code (number of steps to predict).
    model.add(Dense(units=n_future, activation='linear'))
    model.compile(optimizer='adam', loss='mean_squared_error', metrics=['acc'])
    return model
1) Does the output of the 1st LSTM layer go to the 2nd LSTM layer as its input?
2) I have read that in LSTMs, each cell takes the previous hidden state and the current input as its inputs. If the original input (the one described by input_shape) doesn't go to the 2nd LSTM layer, what is the input of the 2nd LSTM layer? Only the hidden state? Which hidden state?
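To illustrate what I'm asking about, here is a minimal sketch I used to inspect the shapes flowing between the first two layers (assuming TensorFlow 2.x Keras and dummy input dimensions I made up; the printed shapes are what I see, but I'm not sure how to interpret them):

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Bidirectional

# Dummy shape: 10 timesteps, 1 feature per timestep.
probe = Sequential()
probe.add(Bidirectional(LSTM(units=30, return_sequences=True), input_shape=(10, 1)))
probe.add(LSTM(units=30, return_sequences=True))

# With return_sequences=True, a layer emits its hidden state at every
# timestep, so the next layer receives a full sequence of vectors.
probe.summary()
# Bidirectional layer output: (None, 10, 60)  -- 30 forward + 30 backward units concatenated
# Second LSTM output:         (None, 10, 30)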