I have a model-building function, shown below:
from tensorflow.keras.layers import (Input, Conv1D, BatchNormalization,
                                     Activation, LSTM, Flatten, Dense, Dropout)
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

def og_build_model_5layer(n_rows, n_cols):
    inp = Input(shape=(n_cols, n_rows), name='inp')
    print('model_input shape:', inp.shape)
    c1 = Conv1D(50, 3, name='conv_1', padding='same', kernel_initializer='glorot_uniform')(inp)
    b1 = BatchNormalization(name='BN_1')(c1)
    a1 = Activation('relu')(b1)
    c2 = Conv1D(50, 3, name='conv_2', padding='same', kernel_initializer='glorot_uniform')(a1)
    b2 = BatchNormalization(name='BN_2')(c2)
    a2 = Activation('relu')(b2)
    c3 = Conv1D(50, 3, name='conv_3', padding='same', kernel_initializer='glorot_uniform')(a2)
    b3 = BatchNormalization(name='BN_3')(c3)
    a3 = Activation('relu')(b3)
    c4 = Conv1D(50, 3, name='conv_4', padding='same', kernel_initializer='glorot_uniform')(a3)
    b4 = BatchNormalization(name='BN_4')(c4)
    a4 = Activation('relu')(b4)
    c5 = Conv1D(50, 3, name='conv_5', padding='same', kernel_initializer='glorot_uniform')(a4)
    b5 = BatchNormalization(name='BN_5')(c5)
    a5 = Activation('relu')(b5)
    ######## ADD one LSTM layer HERE ##################
    fl = Flatten(name='fl')(LSTM_OUTPUT)  # LSTM_OUTPUT is the piece I am missing
    den = Dense(30, name='dense_1')(fl)
    drp = Dropout(0.5)(den)
    output = Dense(1, activation='sigmoid')(drp)
    opt = Adam(learning_rate=1e-4)
    model = Model(inputs=inp, outputs=output, name='model')
    extractor = Model(inputs=inp, outputs=model.get_layer('fl').output)
    model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])
    model.summary()  # summary() must be called, not just referenced
    return model, extractor
Here I have 5 Conv1D layers (each accepting one image at a time), and I want to add a single LSTM layer that consumes a sequence of 200 processed images, training the whole CNN+LSTM model end to end. I am confused about how to add the LSTM layer, since it needs a sequence of 200 processed inputs, whereas the five convolutional layers above accept one input at a time. I know about TimeDistributed(Conv1D), but I do not want to use it. Can this end-to-end training be done? Any help is appreciated; a sketch of the shapes I am after follows below.
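To make the intended data flow concrete, here is a minimal runnable sketch. seq_len = 200 comes from my problem; n_cols, n_rows, the LSTM width, and the layer sizes are placeholder values I made up. TimeDistributed appears here only to illustrate the shapes I want to reproduce, since that wrapper is exactly what I would like to avoid:

from tensorflow.keras.layers import (Input, Conv1D, LSTM, TimeDistributed,
                                     Flatten, Dense)
from tensorflow.keras.models import Model

seq_len, n_cols, n_rows = 200, 128, 1                # assumed example dimensions

seq_inp = Input(shape=(seq_len, n_cols, n_rows))     # a sequence of 200 "images"
x = TimeDistributed(Conv1D(50, 3, padding='same',
                           activation='relu'))(seq_inp)  # conv applied per image
x = TimeDistributed(Flatten())(x)                    # -> (batch, 200, n_cols * 50)
x = LSTM(64)(x)                                      # consumes the 200-step sequence
out = Dense(1, activation='sigmoid')(x)

sketch = Model(seq_inp, out)
sketch.summary()                                     # shows (None, 200, ...) -> (None, 64)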