
I'm new to the world of Deep Learning and I would like to clarify something about my first Deep Learning code, the MNIST example. I may also be completely wrong, BTW, so please take it easy :)

I have split the training data into batches of size 50, with a maximum of 15 epochs (or until the validation loss starts increasing).

I am getting 93% accuracy on just the 1st epoch. How is that possible if (as far as I know) during the 1st epoch the network forward- and backpropagates the complete training set just once, so it has adjusted its weights and biases only once?

I thought I would get good accuracy only after many epochs, not after just the first adjustment of the weights.

  • Many sites include the training progress graphs of DL models. What do those say about the expected progress of MNIST training? – Prune Aug 28 '18 at 16:19
  • It's not abnormal; at the end of the 1st epoch the model has already seen some tens of thousands of training samples... – desertnaut Aug 28 '18 at 16:20

1 Answer


Yes, you can get good accuracy in the first epoch. It depends mostly on the complexity of the data and of the model you build. Also, if the learning rate is high, it can happen that you reach a high training accuracy early on.

Also, regarding the adjusting of weights and biases: this is mini-batch training, and for every mini-batch the model updates the weights. So by the end of the first epoch the weights have already been updated many times, namely the number of training samples divided by the batch size.
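To make that concrete, here is a minimal sketch of the arithmetic, using the batch size from the question and assuming the standard 60,000-image MNIST training split (the exact split after any validation holdout may differ):

```python
# Sketch: counting weight updates in a single epoch of mini-batch training.
num_training_samples = 60_000  # standard MNIST training set size (assumption)
batch_size = 50                # batch size from the question

# One gradient step (weight update) is taken per mini-batch,
# so one epoch already performs many updates, not just one.
updates_per_epoch = num_training_samples // batch_size
print(updates_per_epoch)  # 1200 weight updates during the first epoch
```

So even though the network sees each image only once in the first epoch, its weights have been adjusted over a thousand times by the time that epoch ends, which is plenty for a simple dataset like MNIST.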

Mohanrac