I want to build a sequence-to-sequence autoencoder for signal compression. I wanted to start with a standard LSTM-based autoencoder, but Keras complains about my model. Any hint as to what I'm doing wrong?
from keras.layers import Input, LSTM, RepeatVector
from keras.models import Model
timesteps = 10    # length of each input sequence
input_dim = 4     # number of signal channels per timestep
latent_dim = 128  # size of the compressed representation
# Create the encoder: the final LSTM state is the compressed representation.
inputs = Input(shape=(timesteps, input_dim))
encoded = LSTM(latent_dim)(inputs)
encoder = Model(inputs, encoded)
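# Sanity check I plan to run once this builds (my assumption: the encoder
# output should be a single latent vector of shape (None, latent_dim) = (None, 128)):
encoder.summary()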
# Create the decoder: repeat the latent vector and unroll it back into a sequence.
decInput = Input(shape=(latent_dim,))  # shape must be a tuple
decoded = RepeatVector(timesteps)(decInput)
decoded = LSTM(input_dim, return_sequences=True)(decoded)
decoder = Model(decInput, decoded)
# Join the two models into the end-to-end autoencoder.
joinedInput = Input(shape=(timesteps, input_dim))
encoderOut = encoder(joinedInput)
joinedOut = decoder(encoderOut)
sequence_autoencoder = Model(joinedInput, joinedOut)
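For completeness, this is roughly how I intend to compile and train the autoencoder once it builds (mean-squared-error reconstruction of the input signal; the random array below is just a stand-in for my real data):

import numpy as np

# Stand-in data: 1000 signals of 10 timesteps with 4 channels each.
x_train = np.random.random((1000, timesteps, input_dim))

# Reconstruction objective: the network should reproduce its own input.
sequence_autoencoder.compile(optimizer='adam', loss='mse')
sequence_autoencoder.fit(x_train, x_train, epochs=10, batch_size=32)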
On the line
encoded = LSTM(latent_dim)(inputs)
I get the following error:
TypeError: Expected int32, got list containing Tensors of type '_Message' instead.
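In case the library versions are relevant, this is how I check them (TensorFlow backend assumed):

import keras
import tensorflow as tf
print(keras.__version__)  # Keras version
print(tf.__version__)     # TensorFlow backend version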