I am implementing a ResNet50 model on medical images belonging to four classes. I initially had a dataset of 250 images per class. I split them into two folders, train and val; I use the val data as the test dataset, 80% of train as the training dataset, and the remaining 20% of train as the validation dataset.
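For clarity, the 80/20 split described above can be done with scikit-learn's `train_test_split`. This is only an illustrative sketch with tiny stand-in arrays (the array names and sizes here are placeholders, not my actual loading code):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Tiny stand-in arrays; in my case X would hold the 1000 training images
# (250 per class) and y their one-hot labels.
X = np.random.rand(1000, 8, 8, 3).astype('float32')
y = np.eye(4)[np.repeat(np.arange(4), 250)]

# Stratify so each of the four classes keeps the same 80/20 proportion.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y.argmax(axis=1), random_state=42)

print(X_train.shape, X_val.shape)  # (800, 8, 8, 3) (200, 8, 8, 3)
```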
I tried code I found online and have experimented a lot, but nothing improves the validation accuracy, although the training accuracy is decent (and can be improved later).
Please suggest ways to improve the validation accuracy for my problem statement. PS: Although the images are black and white, I used an input shape of (224, 224, 3) because I couldn't adapt the code I found for grayscale images. I hope that's not the major issue here.
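In case it matters: the usual way to feed grayscale images into a network expecting a 3-channel input is to replicate the single channel three times. A minimal sketch (the `gray` batch below is a random stand-in, not my data):

```python
import numpy as np

# Stand-in batch of 10 single-channel 224x224 images.
gray = np.random.rand(10, 224, 224, 1).astype('float32')

# Copy the grayscale channel three times along the last axis
# so the batch matches ResNet50's (224, 224, 3) input shape.
rgb = np.repeat(gray, 3, axis=-1)

print(rgb.shape)  # (10, 224, 224, 3)
```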
Reference code: https://github.com/anujshah1003/Transfer-Learning-in-keras---custom-data/blob/master/transfer_learning_resnet50_custom_data.py
The only changes I made were updating the dataset directories and excluding the Flatten layer, because the avg_pool layer's output is already flat, so the Dense layer can be applied directly.
import time
from keras.layers import Dense
from keras.models import Model

# Replace the ImageNet classification head with a 4-class softmax.
# avg_pool already produces a flat feature vector, so no Flatten is needed.
last_layer = model.get_layer('avg_pool').output
out = Dense(num_classes, activation='softmax', name='output_layer')(last_layer)
custom_resnet_model = Model(inputs=image_input, outputs=out)

t = time.time()
hist = custom_resnet_model.fit(X_train, y_train, batch_size=32, epochs=12, verbose=1,
                               validation_data=(X_test, y_test))
print('Training time: %s' % (time.time() - t))  # elapsed time is now minus start

(loss, accuracy) = custom_resnet_model.evaluate(X_test, y_test, batch_size=10, verbose=1)
print("[INFO] loss={:.4f}, accuracy: {:.4f}%".format(loss, accuracy * 100))