I want to train a stateful LSTM network using the functional API in Keras.
The fit method is fit_generator.
I am able to train it using batch_size = 1.
My Input layer is:
Input(shape=(n_history, n_cols), batch_shape=(batch_size, n_history, n_cols),
      dtype='float32', name='daily_input')
The generator is as follows:
def training_data():
    while 1:
        for i in range(0, pdf_daily_data.shape[0] - n_history, 1):
            x = f(i)       # f(i) has shape (1, n_history, n_cols)
            y = target(i)  # label for sample i (renamed so the function is not shadowed by its result)
            yield (x, y)
And then the fit is:
model.fit_generator(training_data(),
                    steps_per_epoch=pdf_daily_data.shape[0] // batch_size, ...)
This works and trains well; however, it is very slow, since with batch_size = 1 a gradient update is performed at every single time step.
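For context, here is what I imagine a batched generator would have to look like: each yielded x would need shape (batch_size, n_history, n_cols). This is only a sketch with illustrative stand-ins (sample, target, and the dummy data below are not my real functions), and I am unsure whether batching consecutive windows like this preserves the per-position sequence continuity that a stateful LSTM expects between batches, which is essentially my question:

```python
import numpy as np

# Illustrative stand-ins for my real data (shapes only):
n_history, n_cols, batch_size = 5, 3, 4
pdf_daily_data = np.random.rand(100, n_cols)

def sample(i):
    # one sliding window, shape (1, n_history, n_cols)
    return pdf_daily_data[i:i + n_history][np.newaxis, ...]

def target(i):
    # value following the window, shape (1, n_cols)
    return pdf_daily_data[i + n_history][np.newaxis, ...]

def batched_training_data():
    while 1:
        for i in range(0, pdf_daily_data.shape[0] - n_history - batch_size, batch_size):
            # stack batch_size consecutive windows along the batch axis
            x = np.concatenate([sample(i + j) for j in range(batch_size)], axis=0)
            y = np.concatenate([target(i + j) for j in range(batch_size)], axis=0)
            yield (x, y)  # x: (batch_size, n_history, n_cols)
```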
How, within this configuration, can I set batch_size > 1? Remember: the LSTM layer has stateful=True.