While training a deep learning model, I noticed something odd. Let me try to explain.
I built a Convolutional Neural Network using TensorFlow Keras. I have 31561 training instances and a batch size of 32. The total count shown during training differs between my laptop and a GPU machine.
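For context, the training call looks roughly like this. This is a simplified stand-in, not my actual architecture or data; the dummy shapes and layer choices are just for illustration:

```python
import numpy as np
import tensorflow as tf

# Dummy data standing in for my real dataset (31561 instances).
# The input shape here is illustrative, not my actual one.
x_train = np.random.rand(31561, 28, 28, 1).astype("float32")
y_train = np.random.randint(0, 10, size=(31561,))

# Placeholder CNN; the real architecture is not shown here
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# With batch_size=32, the progress bar on my laptop counts up to 987
model.fit(x_train, y_train, batch_size=32, epochs=1)
```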
Training on Laptop
When I run the code on my laptop, the model appears to update its gradients after each batch, so training iterates as follows:
Epoch 1
1/987, 2/987, 3/987, ..., 987/987
As I understand it, the model divides the total number of training instances by the batch size and rounds up, giving 987 steps per epoch (31561 / 32 ≈ 986.3, rounded up to 987).
A screenshot is attached for more detail.
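To double-check that reading, here is a minimal sketch of how I understand the steps-per-epoch count to be derived (variable names are mine, for illustration only):

```python
import math

num_samples = 31561  # total training instances
batch_size = 32

# Rounding up means a final partial batch still counts as one step
steps_per_epoch = math.ceil(num_samples / batch_size)
print(steps_per_epoch)  # prints 987
```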
Training on GPU Machine
This is my university's GPU machine, which I access remotely. When I run the same code there, the training progress is displayed as follows:
Epoch 1
1/31561, 2/31561, 3/31561,..., 31561/31561
See the attached screenshot for more detail:
Here the counter runs up to 31561 instead of 987, as if it were counting individual samples rather than batches. I don't understand why this value changes when the same code runs on two different platforms. Surprisingly, the results from both runs are almost identical.
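In case it is relevant, this is how I compare the two environments. I suspect the installed versions might differ between the machines, but I have not confirmed that this explains the display difference:

```python
import sys
import tensorflow as tf

# Print the interpreter and TensorFlow/Keras versions on each machine
# to check whether the two environments actually match
print(sys.version)
print(tf.__version__)
print(tf.keras.__version__)
```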