I'm trying to determine my model's accuracy without training or updating the weights, so I've set every layer to `trainable = False`.
When I run `fit_generator` on a generator with `shuffle = False`, I get consistent results each time.
When I run `fit_generator` on a generator with `shuffle = True`, the results jump around a bit. Given that the input data is the same and the model isn't training, I would expect the model's internal state not to change and the accuracy to be the same on the same dataset regardless of ordering.
However, this ordering dependency implies that some sort of state in the model is changing despite `trainable = False`. What's happening inside the model that's causing this?
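
For reference, here is a rough sketch of the setup I'm describing (standalone Keras API assumed); the architecture, directory path, image size, and batch size below are placeholders rather than my exact code:

```python
# Minimal sketch of the setup described above (standalone Keras API assumed).
# The architecture, directory path, image size, and batch size are placeholders.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator

# Small model with every layer frozen, so no weights should update during fit.
model = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    MaxPooling2D(),
    Flatten(),
    Dense(1, activation='sigmoid'),
])
for layer in model.layers:
    layer.trainable = False
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

datagen = ImageDataGenerator(rescale=1.0 / 255)

# Same directory and batch size; only the ordering of samples differs.
gen_ordered = datagen.flow_from_directory(
    'data/train', target_size=(64, 64), batch_size=32,
    class_mode='binary', shuffle=False)
gen_shuffled = datagen.flow_from_directory(
    'data/train', target_size=(64, 64), batch_size=32,
    class_mode='binary', shuffle=True)

# Reported accuracy is identical across runs with the ordered generator,
# but varies across runs with the shuffled one.
model.fit_generator(gen_ordered, steps_per_epoch=len(gen_ordered), epochs=1)
model.fit_generator(gen_shuffled, steps_per_epoch=len(gen_shuffled), epochs=1)
```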