
I have a dataset with 200,000 samples, which I split with train_test_split from scikit-learn.
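
For reference, the split looks roughly like this (X and y are placeholder names for my integer-encoded sequences and binary labels):

from sklearn.model_selection import train_test_split

# X: integer-encoded sequences of length 14, y: 0/1 labels (placeholder names).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)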

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

model = Sequential()
model.add(Embedding(50000,128, input_length=14))
model.add(LSTM(16, return_sequences=True, dropout=0.3, recurrent_dropout=0.2))
model.add(LSTM(16, dropout=0.3, recurrent_dropout=0.2))

model.add(Dense(1, activation='sigmoid'))
model.compile(loss='mean_squared_logarithmic_error', optimizer='Adam', metrics=['accuracy'])

I get a low accuracy of 0.39.

Could someone tell me what I am doing wrong here?

dyro

2 Answers


Try adding more fully connected layers between the LSTM layers and the output layer, for example:

model = Sequential()
model.add(Embedding(50000,128, input_length=14))
model.add(LSTM(16, return_sequences=True, dropout=0.3, recurrent_dropout=0.2))
model.add(LSTM(16, dropout=0.3, recurrent_dropout=0.2))
model.add(Dense(10))  # extra fully connected layer between the LSTMs and the output
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='mean_squared_logarithmic_error', optimizer='Adam', metrics=['accuracy'])
user239457

Low is relative. How much accuracy do you expect, and what baseline model(s) are you comparing against?
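
For instance, a majority-class baseline gives you a floor to compare against. Here is a minimal sketch with scikit-learn's DummyClassifier, assuming the X_train/X_test split from your question:

from sklearn.dummy import DummyClassifier

# Predict the most frequent class for every sample
# (assumes X_train, y_train, X_test, y_test come from your train_test_split).
baseline = DummyClassifier(strategy='most_frequent')
baseline.fit(X_train, y_train)
print("majority-class baseline accuracy:", baseline.score(X_test, y_test))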

Also, why did you pick these specific values for your hyper-parameters? Have you tried searching for optimal hyper-parameters?
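
A minimal sketch of such a search, assuming the shapes from your question (the stand-in data is only there to make the snippet runnable, and it swaps in binary_crossentropy, the usual loss for a sigmoid output):

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

# Stand-in data with the question's shapes; replace with your real X_train / y_train.
X_train = np.random.randint(0, 50000, size=(1000, 14))
y_train = np.random.randint(0, 2, size=(1000,))

def build_model(units, dropout):
    model = Sequential()
    model.add(Embedding(50000, 128, input_length=14))
    model.add(LSTM(units, return_sequences=True, dropout=dropout, recurrent_dropout=0.2))
    model.add(LSTM(units, dropout=dropout, recurrent_dropout=0.2))
    model.add(Dense(1, activation='sigmoid'))
    # binary_crossentropy is the standard loss for a sigmoid output.
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

best = None
for units in (16, 64, 128):           # LSTM width
    for dropout in (0.0, 0.3, 0.5):   # dropout rate
        model = build_model(units, dropout)
        history = model.fit(X_train, y_train, epochs=2, batch_size=128,
                            validation_split=0.2, verbose=0)
        val_acc = history.history['val_accuracy'][-1]
        if best is None or val_acc > best[0]:
            best = (val_acc, units, dropout)

print("best val_accuracy=%.3f with units=%d, dropout=%.1f" % best)

A proper search would use something like KerasTuner or cross-validation, but even a small grid like this tells you whether the architecture or the hyper-parameters are the bottleneck.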

Inon Peled