
I have been working on a multitask model using VGG16 with no dropout layers. I find that the validation accuracy is higher than the training accuracy, and the validation loss is lower than the training loss.

I can't figure out why this is happening in the model.

Below is the training plot:

[training plot]

Data:

I am using randomly shuffled images split into 70% train, 15% validation, and 15% test. The results on the 15% test data are as follows:

[test-set results]
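For reference, here is a minimal sketch of the kind of random 70/15/15 split described above, assuming a scikit-learn workflow; the array names and shapes are illustrative, not from the original post:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Illustrative placeholder data; names and shapes are assumptions
# (VGG16 would actually take 224x224 RGB images).
images = np.random.rand(1000, 32, 32, 3)
labels = np.random.randint(0, 5, size=1000)

# First split off 70% for training (shuffled by default).
x_train, x_tmp, y_train, y_tmp = train_test_split(
    images, labels, train_size=0.70, random_state=42)

# Split the remaining 30% in half: 15% validation, 15% test.
x_val, x_test, y_val, y_test = train_test_split(
    x_tmp, y_tmp, test_size=0.50, random_state=42)
```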

Do you think these results are too good to be true?

— Obiii

1 Answer


At the beginning, yes, but towards the end you can see they start to swap places.

At the end of training you are getting near the overfitting point (if the validation loss starts increasing or the validation accuracy starts decreasing, then you have reached overfitting).
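If you want training to stop automatically around that point, a common approach (not mentioned in the original answer) is early stopping on the validation loss. A minimal Keras sketch; the tiny model, data, and patience value are placeholders, not from the original question:

```python
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import EarlyStopping

# Tiny stand-in model; the original question uses VGG16.
model = models.Sequential([
    layers.Flatten(input_shape=(32, 32, 3)),
    layers.Dense(64, activation="relu"),
    layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder data; shapes and class count are assumptions.
x_train = np.random.rand(200, 32, 32, 3)
y_train = np.random.randint(0, 5, size=200)
x_val = np.random.rand(50, 32, 32, 3)
y_val = np.random.randint(0, 5, size=50)

# Stop once val_loss has not improved for 5 epochs; keep the best weights seen.
early_stop = EarlyStopping(monitor="val_loss", patience=5,
                           restore_best_weights=True)

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=100,
          callbacks=[early_stop])
```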

But at the beginning, that behavior might be explained by some imbalance between the training and validation data. Maybe you have easier examples in the validation set, a class imbalance, more empty values, etc.
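One quick sanity check for that kind of imbalance is to compare the class distribution of each split, and to use a stratified split so the proportions match. A sketch assuming integer label arrays; all names here are illustrative:

```python
from collections import Counter

import numpy as np
from sklearn.model_selection import train_test_split

# Illustrative placeholder data; names and shapes are assumptions.
images = np.random.rand(1000, 8)
labels = np.random.randint(0, 5, size=1000)

# stratify=labels keeps the class proportions equal across the splits.
x_train, x_val, y_train, y_val = train_test_split(
    images, labels, test_size=0.15, stratify=labels, random_state=42)

# Compare class frequencies; large gaps would suggest an unlucky split.
print("train:", Counter(y_train))
print("val:  ", Counter(y_val))
```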

— Daniel Möller