
VGGNet is not learning while fine-tuning.

I trained a 16-layer VGGNet model on ECG data. After that, I designed a new model by taking the conv base of the trained VGGNet and adding fully connected layers on top of it. The new model is not learning at all: it shows the same accuracy and loss epoch after epoch. Later, I designed a completely new model (a variant of VGGNet) from scratch using the Keras library, but this model also does not improve during training. What could be the possible reasons? Every model I train (including ones that used to work perfectly well) gives the same 89.02% accuracy.

The model summary is:

    Layer (type)                    Output Shape        Param #     Connected to
    ============================================================================
    input_1 (InputLayer)            (None, 1201, 1)     0
    input_2 (InputLayer)            (None, 401, 1)      0
    sequential_1 (Sequential)       (None, 2560)        8176064     input_1[0][0]
    sequential_2 (Sequential)       (None, 25088)       49664       input_2[0][0]
    concatenate_1 (Concatenate)     (None, 27648)       0           sequential_1[1][0], sequential_2[1][0]
    dense_1 (Dense)                 (None, 1024)        28312576    concatenate_1[0][0]
    dropout_1 (Dropout)             (None, 1024)        0           dense_1[0][0]
    dense_2 (Dense)                 (None, 512)          524800     dropout_1[0][0]
    dropout_2 (Dropout)             (None, 512)         0           dense_2[0][0]
    dense_3 (Dense)                 (None, 256)          131328     dropout_2[0][0]
    dropout_3 (Dropout)             (None, 256)         0           dense_3[0][0]
    dense_4 (Dense)                 (None, 64)           16448      dropout_3[0][0]
    dense_5 (Dense)                 (None, 2)            130        dense_4[0][0]

Training code

    from keras.callbacks import ModelCheckpoint, EarlyStopping

    checkpointer = ModelCheckpoint(filepath='modifiedVGGBasic.bestweights.hdf5',
                                   verbose=1, monitor='val_acc', mode='max',
                                   save_best_only=True)
    earlystop = EarlyStopping(monitor='val_acc', min_delta=0.001, patience=50,
                              verbose=2, mode='max', restore_best_weights=True)

    ecg_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])

    result = ecg_model.fit([xt1r, xt2r], yt, validation_data=([xv1r, xv2r], yv),
                           batch_size=128, class_weight=class_weights,
                           epochs=150, verbose=2, callbacks=[earlystop, checkpointer])

The output of two epochs is shown below. The model gives the same 89.02% accuracy over all epochs and does not learn.

    Train on 53819 samples, validate on 13455 samples
    Epoch 1/150 - 916s - loss: 1.7631 - acc: 0.8866 - val_loss: 1.7705 - val_acc: 0.8902
    Epoch 00001: val_acc improved from -inf to 0.89015, saving model to modifiedVGGBasic.bestweights.hdf5
    Epoch 2/150 - 888s - loss: 1.7703 - acc: 0.8902 - val_loss: 1.7705 - val_acc: 0.8902
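
A binary classifier stuck at exactly the same accuracy every epoch is a classic sign that it has collapsed to predicting the majority class. A quick sanity check is to compare `val_acc` against the majority-class fraction of the labels. This is a sketch using a toy stand-in for the real one-hot `yv` with a hypothetical ~89/11 split; substitute the actual validation labels:

```python
import numpy as np

# Toy one-hot validation labels standing in for yv, with a
# hypothetical 89/11 class split matching the reported accuracy.
yv = np.zeros((1000, 2))
yv[:890, 0] = 1   # majority class
yv[890:, 1] = 1   # minority class

# Fraction of the most frequent class. If val_acc never moves away
# from this number, the network is predicting one class for every sample.
majority_fraction = yv.sum(axis=0).max() / len(yv)
print(majority_fraction)  # 0.89 here, close to the reported 0.8902
```

If the fractions match, the problem is usually an optimization issue (learning rate too high, dead ReLUs, or un-normalized inputs) rather than the architecture itself.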

  • have you tried setting a different learning rate for the optimizer? is the input data normalized in any way? – SamProell Apr 02 '19 at 06:53
  • @SamProell Yes, I have tried. Data is normalized using a standard scaler. The issue is that whatever model I try to train (models which used to work well), it gives the same 89% accuracy. – M. Jangra Apr 02 '19 at 07:10
  • You should use plain SGD, Adam does not work well with VGG. – Dr. Snoopy Apr 02 '19 at 07:12
  • and you are using new training data, that the model has not seen before? – SamProell Apr 02 '19 at 07:33
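
Following the suggestion in the comments, one concrete thing to try is recompiling with plain SGD at a small learning rate instead of Adam. Below is a minimal sketch using `tf.keras` (the question uses standalone Keras, which has the same API); the two-layer `ecg_model` here is a hypothetical stand-in for the actual two-input model:

```python
from tensorflow import keras

# Stand-in model; replace with the real two-input ecg_model.
ecg_model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(27648,)),
    keras.layers.Dense(2, activation='softmax'),
])

# Plain SGD with a conservative learning rate, as suggested for VGG-style
# networks. If the loss still does not move, try 1e-3 or 1e-4.
sgd = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
ecg_model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['acc'])
```

Lowering Adam's learning rate (e.g. `keras.optimizers.Adam(learning_rate=1e-4)`) is a comparable experiment before switching optimizers entirely.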

0 Answers