Training accuracy and validation accuracy both reach nearly 0.87, but in the testing step the `evaluate()` function gives fluctuating results depending on the `batch_size` parameter value: test accuracy varies from 0.5 to 0.66. Does the optimal `batch_size` value for `evaluate()` have to be the same as in `fit()`?

0x01h
1 Answer
I don't see how the batch size parameter of the evaluate function could change the accuracy of your model. Only the batch size used during training can affect the performance of your model (see this). Are you testing the same trained model in your different tests? If you're testing a newly trained model every time, that would explain the variation in accuracy you observe (because of the random initialization of the weights, for example).
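To illustrate this point, here is a minimal pure-Python/NumPy sketch (not Keras internals, just the same size-weighted averaging idea): for a stateless model whose per-sample predictions are fixed, the accuracy computed batch by batch is identical for every batch size.

```python
import numpy as np

def batched_accuracy(y_true, y_pred, batch_size):
    """Compute accuracy batch by batch, then combine the batch results
    with a correct size-weighted average (counting total correct samples)."""
    n = len(y_true)
    correct = 0
    for start in range(0, n, batch_size):
        yt = y_true[start:start + batch_size]
        yp = y_pred[start:start + batch_size]
        correct += int(np.sum(yt == yp))
    return correct / n

# Fixed predictions from one trained, stateless model: the score is the
# same for every batch size, because no state is shared between batches
# and the per-sample correctness does not depend on the batching.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
scores = {bs: batched_accuracy(y_true, y_pred, bs) for bs in (1, 32, 128, 1000)}
assert len(set(scores.values())) == 1  # identical for all batch sizes
```

So if `evaluate()` scores really do move with `batch_size` on the same trained model, something batch-dependent must be happening inside the model itself.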

Baptiste Pouthier
- No, the test set remains the same. But the tricky point is: how does the `batch_size` value in `evaluate()` change the test accuracy? – 0x01h Jul 30 '19 at 08:09
- Normally it shouldn't change the test accuracy. Can you add some snippets of your code? – Baptiste Pouthier Jul 30 '19 at 08:19
- Also, are you using a fancy RNN architecture which keeps information from one batch to the next? That could explain your problem. – Baptiste Pouthier Jul 30 '19 at 08:34
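To illustrate the effect described in the last comment, here is a deliberately contrived pure-Python toy (not a real Keras RNN): a "model" whose predictions depend on a running state tied to batch boundaries. With such a model, how the data is partitioned into batches changes the per-sample predictions, so `evaluate()`-style scores vary with `batch_size`.

```python
class LeakyBatchModel:
    """Toy model whose prediction depends on a running state that is
    reset at every batch boundary -- so the way the data is split into
    batches changes the per-sample predictions."""
    def predict_batch(self, xs):
        state = 0.0  # reset per batch; position within a batch matters
        preds = []
        for x in xs:
            state += x
            preds.append(1 if state > 1.5 else 0)
        return preds

def evaluate(model, xs, ys, batch_size):
    """Accuracy computed batch by batch, as evaluate() would."""
    correct = 0
    for start in range(0, len(xs), batch_size):
        preds = model.predict_batch(xs[start:start + batch_size])
        correct += sum(p == y for p, y in zip(preds, ys[start:start + batch_size]))
    return correct / len(xs)

xs = [1, 1, 1, 1]
ys = [0, 1, 1, 1]
print(evaluate(LeakyBatchModel(), xs, ys, batch_size=4))  # 1.0
print(evaluate(LeakyBatchModel(), xs, ys, batch_size=2))  # 0.75
```

A stateless feed-forward model has no such batch-dependent state, which is why its evaluation accuracy should not depend on `batch_size`.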