
[Screenshot of the code, transcribed below:]

>>> boxes = tf.random_normal([5])
>>> with s.as_default():
...     s.run(boxes)
...     s.run(keras.backend.argmax(boxes, axis=0))
...     s.run(tf.reduce_max(boxes, axis=0))
...
array([ 0.37312034, -0.97431135,  0.44504794,  0.35789603,  1.2461706 ],
    dtype=float32)
3
0.856236


Why am I getting 0.856236? I expect the value to be 1.2461706, since that is the largest element. Right?

I get the correct answer if I use tf.constant, but not when I use random_normal.
Bubesh p

2 Answers


A new boxes is generated each time you call s.run(), because random_normal produces a fresh sample on every evaluation. So your three results come from three different random vectors. If you want consistent results, fetch everything in a single s.run() call:

result = s.run([boxes, keras.backend.argmax(boxes, axis=0), tf.reduce_max(boxes, axis=0)])
print(result[0])
print(result[1])
print(result[2])

# prints:
[ 0.69957364  1.3192859  -0.6662426  -0.5895929   0.22300807]
1
1.3192859

In addition, please post your code as text rather than as a screenshot.

giser_yugang

TensorFlow (in graph mode) is different from numpy because TF operations are symbolic. When you call random_normal, you don't get numeric values but a symbolic node representing a normal distribution, so each time you evaluate it, you get different numbers.

Every operation that consumes this node draws a fresh sample when it is evaluated in a separate run, and that explains the results you see.
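The same re-sampling behaviour can be illustrated without TensorFlow. Below is a minimal pure-Python sketch; `make_random_tensor` is a made-up stand-in for a symbolic `random_normal` node, not a TensorFlow API. Evaluating it in two separate "runs" gives two different samples, while evaluating it once and reusing the result keeps the argmax and the max consistent:

```python
import random

# Toy stand-in for a symbolic tensor: each evaluation draws fresh values,
# mimicking how random_normal re-samples on every Session.run().
def make_random_tensor(n, seed=None):
    rng = random.Random(seed)
    return lambda: [rng.gauss(0.0, 1.0) for _ in range(n)]

boxes = make_random_tensor(5)

# Two separate "runs": each evaluation yields a different sample, so an
# argmax computed in one run need not point at the max of another run.
sample_a = boxes()
sample_b = boxes()
assert sample_a != sample_b

# One "run": evaluate once, then every derived quantity agrees.
values = boxes()
idx = max(range(len(values)), key=values.__getitem__)
assert values[idx] == max(values)
```

This is exactly why fetching `boxes`, the argmax, and the max in a single `s.run()` call gives a self-consistent answer.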

Dr. Snoopy
  • 140