
Since this link only describes how predictions vary across learning approaches, out of curiosity I want to find out why TensorFlow computations themselves vary slightly.

import tensorflow as tf
sess = tf.Session() # TensorFlow session

var1 = tf.placeholder(tf.float32) # one placeholder
var2 = tf.placeholder(tf.float32) # another one
addition_node = var1 + var2 # node that adds the two placeholders

array = sess.run(addition_node, {var1: [1.1, 2.2, 3.3], var2: [1, 1, 1]}) # evaluate the node with fed values
print(array)

Expected output:

[ 2.1000000   3.2000000   4.3000000]

Actual output:

[ 2.0999999   3.20000005  4.30000019]

1 Answer

This is normal for 32-bit floating point values. The values 1.1, 2.2, and 3.3 are not exactly representable in 32-bit floating point.

import numpy as np
x = np.array([1.1, 2.2, 3.3], dtype=np.float32)  # same values as the placeholders
y = np.array([1, 1, 1], dtype=np.float32)
x + y

>>> array([ 2.0999999 ,  3.20000005,  4.30000019], dtype=float32)
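One way to see this directly is to print the exact decimal expansion of what float32 actually stores; a small sketch using NumPy and Python's decimal module (widening float32 to a Python float is exact, so Decimal shows the true stored value):

import numpy as np
from decimal import Decimal

# Print the exact decimal value of the nearest float32 to each literal;
# e.g. 1.1 is actually stored as 1.10000002384185791015625.
for v in [1.1, 2.2, 3.3]:
    print(v, '->', Decimal(float(np.float32(v))))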

If you haven't read it, you might want to google "What Every Computer Scientist Should Know About Floating-Point Arithmetic" to better understand these limitations.
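If the visible error matters for your use case, you can build the same graph with tf.float64 instead; a minimal sketch reusing the TF 1.x API from your question (the rounding error does not disappear, it just shrinks to double-precision scale):

import tensorflow as tf

sess = tf.Session()
var1 = tf.placeholder(tf.float64)  # 64-bit placeholders instead of 32-bit
var2 = tf.placeholder(tf.float64)
addition_node = var1 + var2

# The error drops to around 1e-16, which is usually hidden
# by the default print precision.
print(sess.run(addition_node, {var1: [1.1, 2.2, 3.3], var2: [1, 1, 1]}))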

Roger Allen
    This! Specifically for Python you can find more here: https://docs.python.org/2/tutorial/floatingpoint.html – rmeertens Feb 16 '17 at 14:49