I'm experimenting with Keras, trying to create both a regular neural network and an LSTM network, each with one input layer (2000 inputs), one hidden layer (256 nodes), and one output layer (1 node). Following the guides in the Keras documentation, this is what I've done:
Regular neural network:
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(2000, input_shape=(2000,), activation='sigmoid'))
model.add(Dense(256, activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
Long short-term memory:
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

model = Sequential()
model.add(Embedding(2000, 256))
model.add(LSTM(256, activation='tanh', dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
As you can see, for the LSTM network I've used an Embedding layer as the input layer. Can this be avoided? From reading the Keras documentation I don't quite understand why one would want to use an embedding layer, but it's the only way I could get the LSTM network to work.
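For reference, my current understanding (which may well be wrong) is that the Embedding layer maps each integer word index to a dense 256-dimensional vector, and that this is what produces the 3D (batch, timesteps, features) tensor an LSTM expects. A minimal sketch, assuming a hypothetical sequence length of 50:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Embedding

# Hypothetical setup: sequences of 50 integer word indices,
# drawn from a vocabulary of 2000 words.
model = Sequential()
model.add(Input(shape=(50,)))
model.add(Embedding(2000, 256))  # each index -> a learned 256-dim vector

# The output is 3D: (batch, 50 timesteps, 256 features)
print(model.output_shape)
```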
However, the final test accuracies of the two models differ considerably, even though exactly the same data is used for evaluation. For example, the LSTM gives around 60% accuracy, while the regular network gets about 90%.
Is this due to the different layer types, and can I use a dense layer as the input layer even though an LSTM layer comes next?
Currently, when I try using a dense layer before the LSTM layer, I get this error:
ValueError: Input 0 is incompatible with layer lstm_1: expected ndim=3, found ndim=2
This is what I tried:
from keras.models import Sequential
from keras.layers import Dense, LSTM

model = Sequential()
model.add(Dense(2000, input_shape=(2000,), activation='sigmoid'))
model.add(LSTM(256, activation='tanh', dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
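My understanding of the error (again, I may be wrong) is that Dense produces a 2D tensor of shape (batch, units), while LSTM expects a 3D tensor of shape (batch, timesteps, features). One workaround I've been experimenting with is inserting a Reshape layer to split the Dense output into hypothetical timesteps, though I'm not sure this is a sensible thing to do:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, Reshape, LSTM

model = Sequential()
model.add(Input(shape=(2000,)))
model.add(Dense(2000, activation='sigmoid'))  # output: (batch, 2000) -- ndim=2
model.add(Reshape((10, 200)))                 # hypothetical split: 10 timesteps x 200 features
model.add(LSTM(256, activation='tanh'))       # now receives the ndim=3 input it expects
model.add(Dense(1, activation='sigmoid'))
```

This at least builds without the ndim error, but I don't know whether the 10 x 200 split is meaningful for my data.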
What I would really like to achieve is two models: a very simple, regular (non-recurrent) neural network, and a pure LSTM neural network. Each should have one input layer, one hidden layer, and one output layer, and both should have the same number of nodes.
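In other words, something like this sketch for the LSTM side, assuming (and this is a guess on my part) that the 2000 inputs can be presented as a sequence of 2000 timesteps with 1 feature each:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

model = Sequential()
model.add(Input(shape=(2000, 1)))  # hypothetical: 2000 timesteps, 1 feature each
model.add(LSTM(256, activation='tanh', dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
```

Is this the right way to get an LSTM without an Embedding layer, or is there a better approach?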