
I'm working on medical image classification with deep learning. I use brain MRI data, convert the scans to JPG, and then train a VGG16 model. When I plot the loss, accuracy, validation loss, and validation accuracy, I get the graphs below.

[Plots: training accuracy, training loss, validation accuracy, validation loss]

The accuracy and val_accuracy get stuck after a certain number of iterations. When I augment the data with rotations at different angles, the result is similar. How can I fix this? Is it caused by the VGG16 model or by my dataset? I've also added my model's graph from TensorBoard; you can check it below.
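
For reference, my augmentation setup looks roughly like this (simplified sketch; the directory path and exact parameters here are placeholders, not my real values):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rotation-based augmentation for the JPG brain MRI images
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,      # scale pixel values to [0, 1]
    rotation_range=30,      # random rotations up to 30 degrees
)

train_generator = train_datagen.flow_from_directory(
    "data/train",           # placeholder path to the training images
    target_size=(224, 224), # VGG16 input size
    batch_size=32,
    class_mode="categorical",
)
```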

This is for my thesis, and I couldn't find helpful information after days of research. This site is my last hope. Thanks in advance.

[Model graph from TensorBoard]

umitkilic

1 Answer


The accuracy and loss curves for the training data and the validation data are almost identical, which suggests that you are not overfitting, and that is desirable. Providing more data by rotating the images helps reduce overfitting, but it will not improve your training accuracy; data augmentation is the right tool when you are overfitting, not when the accuracy on the training data itself is low. Since your training accuracy is relatively low, the network probably does not have enough capacity to capture the complex relationship between your images and the labels. You should try increasing the model complexity with an architecture that has more layers. Perhaps VGG19 will help.
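
As a minimal sketch of what swapping in a deeper backbone could look like in Keras (assuming 224x224 RGB inputs and two classes; adjust the head to your dataset):

```python
from tensorflow.keras.applications import VGG19
from tensorflow.keras import layers, models

# Pretrained VGG19 convolutional base (ImageNet weights, no top classifier)
base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the base for the initial training phase

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),  # set to your number of classes
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```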

When training a machine learning model, you can follow this approach:

  1. Check your training error. If it is high, increase the model complexity. For traditional ML models, you do this by adding more features; for image-based CNNs, you do it by adding convolutional layers and/or increasing the number of filters in each layer (see the sketch after this list).

  2. Check your validation error. If it is considerably higher than your training error, the model is overfitting. Use techniques such as dropout, batch normalization, and more training data to bring the validation error as close to the training error as possible.

You keep repeating these two steps until you reach the desired validation error.
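
As a rough illustration of both levers (hypothetical layer sizes, not tuned for your data), the number of blocks and filters controls capacity for step 1, while batch normalization and dropout address overfitting in step 2:

```python
from tensorflow.keras import layers, models

def build_cnn(num_blocks=3, base_filters=32, num_classes=2):
    """Small CNN where capacity (step 1) and regularization (step 2) are explicit knobs."""
    model = models.Sequential()
    model.add(layers.Input(shape=(224, 224, 3)))
    filters = base_filters
    for _ in range(num_blocks):                 # step 1: more blocks/filters -> more capacity
        model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
        model.add(layers.BatchNormalization())  # step 2: batch norm to help generalization
        model.add(layers.MaxPooling2D((2, 2)))
        filters *= 2
    model.add(layers.Flatten())
    model.add(layers.Dense(256, activation="relu"))
    model.add(layers.Dropout(0.5))              # step 2: dropout against overfitting
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model
```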