
I am trying to adapt the simple LSTM autoencoder shown on the keras.io website to a sequence input, but it throws an error related to the LSTM layers.

from keras.layers import Input, LSTM, RepeatVector
from keras.models import Model
import numpy as np

def autoencoder(timesteps,input_dim):
    inputs = Input(shape=(timesteps, input_dim))
    encoded = LSTM(300)(inputs)

    decoded = RepeatVector(timesteps)(encoded)
    decoded = LSTM(input_dim, return_sequences=True)(decoded)

    encoder = Model(inputs, encoded)
    encoder.compile(optimizer='adam',loss='mse')
    return encoder

sequence = np.array([522,76,2,35,387,13,121,144,98,33,400]).reshape((1,11,1))
model = autoencoder(11,1)
model.fit(sequence,sequence,epochs=100,batch_size=4,verbose=1)

The error:

ValueError: Error when checking target: expected lstm_29 to have 2 dimensions, but got array with shape (1, 11, 1)

  • The shape of the model output is `encoded.shape=(None, 300)`. But the shape of target is `sequence=(1,11,1)`. Maybe you want to use `encoder = Model(inputs, decoded)`? – giser_yugang May 21 '19 at 13:10
  • @giser_yugang Thanks, it worked !! Can you please suggest how I can get more accurate results ? The numbers I'm getting at the output are totally different from original. Even adding more LSTM layers at the encoder side and increasing the number of neurons didn't help. – Kishore Suren May 21 '19 at 14:18
  • Improving accuracy is too broad because it relies on specific scenarios and training data. Maybe you should standardize your input data first. – giser_yugang May 22 '19 at 09:30
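The following is a minimal sketch of what the suggestion in the comments could look like: compile the model that maps the input to the reconstructed sequence (`Model(inputs, decoded)`) rather than to the 2-D encoding, and scale the raw values before training. The function name `build_autoencoder`, the `latent_dim` argument, the scaling scheme, and `batch_size=1` are illustrative choices, not part of the original post.

# Sketch (assumption): train input -> reconstruction, as suggested in the comments,
# and scale the data so the MSE targets fall in a range the LSTM can reproduce.
from keras.layers import Input, LSTM, RepeatVector
from keras.models import Model
import numpy as np

def build_autoencoder(timesteps, input_dim, latent_dim=300):
    inputs = Input(shape=(timesteps, input_dim))
    encoded = LSTM(latent_dim)(inputs)                          # (batch, latent_dim)
    decoded = RepeatVector(timesteps)(encoded)                  # (batch, timesteps, latent_dim)
    decoded = LSTM(input_dim, return_sequences=True)(decoded)   # (batch, timesteps, input_dim)

    autoencoder = Model(inputs, decoded)   # full autoencoder, not just the encoder
    autoencoder.compile(optimizer='adam', loss='mse')
    return autoencoder

raw = np.array([522, 76, 2, 35, 387, 13, 121, 144, 98, 33, 400], dtype='float32')
scaled = (raw / raw.max()).reshape((1, 11, 1))   # simple scaling into [0, 1]

model = build_autoencoder(11, 1)
model.fit(scaled, scaled, epochs=100, batch_size=1, verbose=1)
reconstruction = model.predict(scaled) * raw.max()   # undo the scaling to compare with raw

Scaling also matters because the decoder LSTM uses a tanh activation by default, so its outputs lie roughly in [-1, 1]; unscaled targets in the hundreds can never be matched, which is consistent with the poor reconstructions mentioned in the comments.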

0 Answers