
I'm comparing two models and want to make sense of a weird result.

One model achieves a lower training loss than the other, but gets a higher validation loss.

Since over-fitting and under-fitting are diagnosed by comparing a single model's own training and validation losses (i.e., by the size of its own train-validation gap), I don't think this is an over-fitting issue.

Specifically, I'm training on a point cloud classification task and got: model 1: training loss 1.51, test loss 1.56; model 2: training loss 1.37, test loss 1.58.

All other conditions are the same.
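For reference, here is a minimal sketch of how the two numbers are compared, assuming a PyTorch-style setup (the question doesn't name a framework); `model1`, `model2`, `train_loader`, and `test_loader` are placeholders, not the actual code:

```python
import torch
import torch.nn as nn

def mean_loss(model, loader, criterion, device="cpu"):
    """Average loss over a data loader, with gradients disabled."""
    model.eval()
    total, count = 0.0, 0
    with torch.no_grad():
        for points, labels in loader:
            points, labels = points.to(device), labels.to(device)
            logits = model(points)
            # Weight each batch by its size so the average is per-sample.
            total += criterion(logits, labels).item() * labels.size(0)
            count += labels.size(0)
    return total / count

criterion = nn.CrossEntropyLoss()
# model1, model2, and the two loaders are hypothetical placeholders for
# the two trained classifiers and their (identical) data splits.
for name, model in [("model 1", model1), ("model 2", model2)]:
    print(f"{name}: train {mean_loss(model, train_loader, criterion):.2f}, "
          f"test {mean_loss(model, test_loader, criterion):.2f}")
```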

So the question is: how can it happen that the model with the lower training loss ends up with the higher test loss?

I would be grateful if anyone could help with this problem.

김은석
  • I think you forgot to ask something; it is not clear what the problem is. – Dr. Snoopy Apr 01 '20 at 17:17
  • You may have reached your happy test loss number; they are pretty close. In terms of training loss for model 1: did you do an iteration of the model with less training data? Does that give a higher or lower training loss? – Rachel McGuigan Apr 01 '20 at 22:22
  • @RachelMcGuigan What's your rationale behind recommending training with a partial training dataset? Actually, the results above are from partial-data experiments. There were no problems with the full-dataset experiments, but this happens when we use a partial dataset for fast comparison. It still holds across multiple repetitions, so I wonder why this consistently happens. – 김은석 Apr 02 '20 at 01:16
  • Are you using the same test data in your iterations? – Rachel McGuigan Apr 02 '20 at 22:02
  • @RachelMcGuigan absolutely. – 김은석 Apr 02 '20 at 22:05

0 Answers