
I am training a DenseNet121 architecture on 102 flower classes.

The dataset has roughly 10-20 images per class in each of the training, validation, and test sets, for all 102 classes.

I added dropout of about 0.5, and I noticed that training accuracy is about 70% while validation accuracy is 94%.

Please let me know what I should do next. As I understand it, this does not qualify as a high-variance problem, and if I try to fit the training data better (working on bias), I am afraid it will hurt my ability to fit the validation data, where I am already getting 94% accuracy.

Grant Miller
  • Please go through https://stackoverflow.com/help/how-to-ask before asking the questions, I do prefer you to put this question in the DataScience community. – EMKAY Oct 06 '18 at 10:15
  • @EMKAY "I do prefer" is not a suitable expression here... – desertnaut Oct 06 '18 at 20:11
  • Your question is way too vague & broad, and your chosen title is inconsistent with your post... – desertnaut Oct 06 '18 at 20:12
  • @EMKAY While the question needs a lot more clarification, those questions are more or less at the intersection of the datascience community and stackoverflow since the problems that they stem from could be implementation related and not just theoretical, so they are at home here. – Ash Oct 06 '18 at 21:34

1 Answer


The dataset has roughly 10-20 images per class in each of the training, validation, and test sets, for all 102 classes.

I would reduce the validation and test sets to 5 images per class, so that more images go to training, and run more training iterations. Also, try a lower dropout rate, around 0.1-0.2.
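To see why the rate matters, here is a minimal NumPy sketch of inverted dropout (the variant most frameworks implement); it is an illustration of the mechanism, not the asker's actual DenseNet121 code. It shows that a rate of 0.1 discards far fewer activations than 0.5, while the 1/(1-rate) rescaling keeps the expected activation unchanged in both cases:

```python
import numpy as np

def dropout(x, rate, rng):
    """Inverted dropout: zero out a `rate` fraction of units at training
    time and rescale survivors by 1/(1-rate), so the expected activation
    is unchanged and no rescaling is needed at inference."""
    keep = rng.random(x.shape) >= rate
    return x * keep / (1.0 - rate)

rng = np.random.default_rng(0)
x = np.ones((1000, 64))  # dummy activations, all equal to 1

light = dropout(x, 0.1, rng)  # suggested rate for this small dataset
heavy = dropout(x, 0.5, rng)  # the original, more aggressive rate

# A lower rate zeroes far fewer activations (~10% vs ~50%)...
print((light == 0).mean(), (heavy == 0).mean())
# ...while both preserve the mean activation near 1 via rescaling.
print(light.mean(), heavy.mean())
```

With only 10-20 images per class, dropping half the activations removes a lot of signal from an already data-starved network, which is consistent with the unusually low training accuracy reported in the question.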

John