I'm getting started with TensorFlow: https://www.tensorflow.org/get_started/
While evaluating the model several times to see how the data is fed, I found that the loss changes between runs.
eval_input_fn = tf.contrib.learn.io.numpy_input_fn({"x": x}, y, batch_size=4,
                                                   num_epochs=1)
estimator.evaluate(input_fn=eval_input_fn)
For example, I got the following losses:
0.024675447 or 0.030844312 when batch_size == 2, num_epochs == 2
0.020562874 or 0.030844312 when batch_size == 4, num_epochs == 2
0.015422156 or 0.030844312 when batch_size == 4, num_epochs == 1
Is this phenomenon normal? I do not understand the principle behind it.
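One possible explanation (a guess, not something the tutorial states): numpy_input_fn shuffles the data by default, and the reported loss is typically an average of per-batch mean losses. When the batch size does not evenly divide the number of examples, the last, smaller batch is weighted the same as the full ones, so the averaged result depends on which examples the shuffle puts into that last batch. A minimal NumPy sketch of that averaging effect, using made-up per-example losses (not values from the tutorial model):

```python
import numpy as np

def batched_mean_loss(losses, batch_size, seed=None):
    """Shuffle, split into batches, and average the per-batch mean losses,
    the way a single reported evaluation loss is often computed."""
    rng = np.random.default_rng(seed)
    shuffled = losses[rng.permutation(len(losses))]
    batch_means = [shuffled[i:i + batch_size].mean()
                   for i in range(0, len(shuffled), batch_size)]
    return float(np.mean(batch_means))

losses = np.arange(10.0)  # 10 made-up per-example losses; overall mean is 4.5

# One batch covering everything: always the true mean.
print(batched_mean_loss(losses, batch_size=10, seed=0))  # 4.5

# Uneven batches (4 + 4 + 2): the result depends on the shuffle.
print(batched_mean_loss(losses, batch_size=4, seed=0))
print(batched_mean_loss(losses, batch_size=4, seed=1))
```

If the numpy_input_fn you are using accepts a shuffle argument, passing shuffle=False for evaluation may make the result reproducible, though I have not verified this against that exact contrib version.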
--- Edit: the following was added later
The same thing happens when I use next_batch and eval() without retraining, as in https://www.tensorflow.org/get_started/mnist/pros. When I run the following cell:
# mnist.test.labels.shape: (10000, 10)
for i in range(10):
    batch = mnist.test.next_batch(1000)
    print("test accuracy %g" % accuracy.eval(feed_dict={
        x: batch[0], y_: batch[1], keep_prob: 1.0}))
I got
a)
test accuracy 0.99
test accuracy 0.997
test accuracy 0.986
test accuracy 0.993
test accuracy 0.994
test accuracy 0.993
test accuracy 0.995
test accuracy 0.995
test accuracy 0.99
test accuracy 0.99
b)
test accuracy 0.99
test accuracy 0.997
test accuracy 0.989
test accuracy 0.992
test accuracy 0.993
test accuracy 0.992
test accuracy 0.994
test accuracy 0.993
test accuracy 0.993
test accuracy 0.99
and both the per-batch values and their average keep changing between runs.
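A hedged note on what this pattern suggests: if next_batch walks through a freshly shuffled copy of the test set, then ten batches of 1000 exactly partition the 10000 examples. The per-batch accuracies then vary with the shuffle, while their exact average stays fixed (printing only three decimals can make the average appear to drift). A small NumPy sketch, assuming a hypothetical fixed set of correct/incorrect predictions:

```python
import numpy as np

# Hypothetical: 9925 of 10000 test predictions are correct (overall 0.9925).
correct = np.zeros(10000, dtype=bool)
correct[:9925] = True

rng = np.random.default_rng()  # a fresh, unseeded shuffle, like a new run
rng.shuffle(correct)

# Ten batches of 1000, as in the loop above.
batch_acc = correct.reshape(10, 1000).mean(axis=1)
print(batch_acc)         # varies from shuffle to shuffle
print(batch_acc.mean())  # always exactly 0.9925
```

Since 9925 is not divisible by 10, the ten batch counts can never all be equal, so some per-batch variation is unavoidable even though the full-set accuracy is constant.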