
I'm trying to determine the accuracy of my model without training or updating the weights, so I've set all of my layers to trainable = False.

When I run fit_generator on a generator with shuffle = False, I get consistent results each time.

When I run fit_generator on a generator with shuffle = True, the results jump around a bit. Given that the input data is the same, and the model isn't training, I would expect the internal state of the model not to change and the accuracy to be the same on the same dataset regardless of ordering.

However, this ordering dependency implies that some sort of state in the model is changing despite trainable = False. What's happening inside the model that causes this?
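For reference, here's a minimal, self-contained sketch of the setup (the model, data, and the ArrayBatches generator are illustrative stand-ins for my actual code):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import Sequence

# Illustrative stand-in for my data generator (a keras.utils.Sequence,
# since fit_generator's shuffle flag only reorders Sequence batches).
class ArrayBatches(Sequence):
    def __init__(self, x, y, batch_size=32):
        self.x, self.y, self.batch_size = x, y, batch_size

    def __len__(self):
        return int(np.ceil(len(self.x) / self.batch_size))

    def __getitem__(self, idx):
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        return self.x[sl], self.y[sl]

x = np.random.rand(1000, 20).astype(np.float32)
y = (np.random.rand(1000, 1) > 0.5).astype(np.float32)

model = Sequential([
    Dense(16, activation='relu', input_shape=(20,)),
    Dense(1, activation='sigmoid'),
])

# Freeze every layer, then compile so the freeze takes effect.
for layer in model.layers:
    layer.trainable = False
model.compile(optimizer='sgd', loss='binary_crossentropy',
              metrics=['accuracy'])

# With shuffle=False the reported accuracy is identical across runs;
# with shuffle=True it varies slightly.
model.fit_generator(ArrayBatches(x, y), epochs=1, shuffle=True)
```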


1 Answer


This is a really interesting phenomenon. It probably arises from the fact that most neural network packages use float32 precision, which gives you only about 5-7 significant decimal digits of accuracy. Floating-point addition is not associative, so accumulating the per-batch losses and metrics in a different order (which is exactly what shuffling changes) produces slightly different totals. Here you can read a detailed explanation.
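You can see the effect with plain NumPy, independent of any model (a standalone sketch, not your Keras code):

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.standard_normal(100_000).astype(np.float32)

shuffled = values.copy()
rng.shuffle(shuffled)

def running_sum(arr):
    # Sequential float32 accumulation, like averaging a metric batch by batch.
    total = np.float32(0.0)
    for v in arr:
        total += v
    return total

# Same numbers, different order: the two float32 sums usually
# disagree in the last few digits.
print(running_sum(values), running_sum(shuffled))
```

Accumulating in float64 instead shrinks the gap considerably, which is one way to confirm that precision, not the model, is the culprit.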
