I was training a DenseNet121 architecture on 102 flower classes.
The dataset has roughly 10-20 images per class in each of the training, validation, and test sets.
I added dropout with a rate of about 0.5, and I noticed that training accuracy is around 70% while validation accuracy is 94%.
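For context, the dropout sits in the classifier head, roughly like the sketch below. This is a minimal illustration assuming PyTorch with torchvision's `densenet121` (the actual training loop, transforms, and hyperparameters are omitted):

```python
import torch
from torch import nn
from torchvision import models

# Load a DenseNet121 backbone pretrained on ImageNet.
model = models.densenet121(pretrained=True)

# Replace the 1000-class ImageNet head with a 102-class flower classifier,
# applying dropout (p=0.5) before the final linear layer.
model.classifier = nn.Sequential(
    nn.Dropout(p=0.5),
    nn.Linear(model.classifier.in_features, 102),
)
```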
Please let me know what I should do next. As I understand it, this does not qualify as a high-variance problem. If I try to fit the training data really well (i.e., work on reducing bias), I am afraid it will hurt my ability to fit the validation data, where I am already getting 94% accuracy, and I don't want to compromise that.