I have an LSTM model (Keras) that receives as input the past 20 values of 6 variables and predicts the future 4 values for 3 of those variables. In other words, I have 6 time series and I'm trying to predict their future values using their 20 past values. The basic code is:
from keras.layers import Input, LSTM, Dropout, Dense
from keras.models import Model
from keras.constraints import nonneg

n_features = 6         # number of input variables
past_time_steps = 20   # past values fed to the model
future_time_steps = 4  # values predicted per output variable
hid = 32               # number of hidden units

inputs = Input(shape=(past_time_steps, n_features))
m = LSTM(hid, return_sequences=True)(inputs)
m = Dropout(0.5)(m)
m = LSTM(hid)(m)
m = Dropout(0.5)(m)
# One output head per predicted variable (kernel_constraint was
# called W_constraint in Keras 1).
outputA = Dense(future_time_steps, activation='linear', kernel_constraint=nonneg())(m)
outputB = Dense(future_time_steps, activation='linear', kernel_constraint=nonneg())(m)
outputC = Dense(future_time_steps, activation='linear', kernel_constraint=nonneg())(m)
m = Model(inputs=[inputs], outputs=[outputA, outputB, outputC])
m.compile(optimizer='adam', loss='mae')
m.fit(x, [y1, y2, y3])
So, the input is a NumPy array with shape (500, 20, 6), where 500 is the number of samples (i.e. training time series).
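To make the shapes concrete, here is dummy data matching what I described (random values just for illustration):

```python
import numpy as np

# 500 samples, each with the past 20 time steps of 6 variables.
x = np.random.rand(500, 20, 6)

# Targets: the future 4 values for each of the 3 predicted variables.
y1 = np.random.rand(500, 4)
y2 = np.random.rand(500, 4)
y3 = np.random.rand(500, 4)

# x.shape == (500, 20, 6); each target has shape (500, 4)
```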
Now I have new data available: for each time series I also have a categorical variable that can take 6 values (0, 1, 2, 3, 4, 5). How can I add this information to the model? Can I add another layer that uses this variable? Or should I pad this variable at the beginning/end of each time series, so that the input matrix has shape (500, 21, 6)?
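For the "another layer" option, this is roughly what I had in mind: a second input carrying the categorical value, passed through an Embedding and concatenated with the LSTM output before the Dense heads. This is only a guess at the wiring (the embedding size and `hid` are placeholders):

```python
import numpy as np
from keras.layers import (Input, LSTM, Dropout, Dense,
                          Embedding, Flatten, Concatenate)
from keras.models import Model

hid = 32  # placeholder number of hidden units

# Sequence input: 20 past time steps of 6 variables, as before.
seq_in = Input(shape=(20, 6))
s = LSTM(hid, return_sequences=True)(seq_in)
s = Dropout(0.5)(s)
s = LSTM(hid)(s)
s = Dropout(0.5)(s)

# Second input: one categorical value (0-5) per series, embedded
# and merged with the LSTM output instead of being padded into
# the time dimension.
cat_in = Input(shape=(1,))
c = Embedding(input_dim=6, output_dim=3)(cat_in)
c = Flatten()(c)

merged = Concatenate()([s, c])
outputA = Dense(4, activation='linear')(merged)
outputB = Dense(4, activation='linear')(merged)
outputC = Dense(4, activation='linear')(merged)

model = Model(inputs=[seq_in, cat_in], outputs=[outputA, outputB, outputC])
model.compile(optimizer='adam', loss='mae')
```

With this layout, `fit` would take `[x, cat]` where `cat` has shape (500, 1), rather than a (500, 21, 6) padded matrix. Is this the right direction?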