
I thought special techniques (such as summing logs instead of multiplying directly) would be needed to compute the product of 500 probabilities, but my experiments show little difference between the approaches.

```python
import numpy as np
import tensorflow as tf

p = np.random.rand(500)
print(f'prod       : {np.prod(p)}')
print(f'exp-sum-log: {np.exp(sum(np.log(p)))}')
e = tf.constant(p)
print(f'tensorflow : {tf.math.reduce_prod(e)}')
```

Output from three runs:

```
prod       : 1.564231010023949e-224
exp-sum-log: 1.5642310100240046e-224
tensorflow : 1.5642310100239522e-224

prod       : 7.854750422663386e-232
exp-sum-log: 7.854750422664323e-232
tensorflow : 7.854750422663366e-232

prod       : 3.635104367139144e-211
exp-sum-log: 3.635104367137875e-211
tensorflow : 3.63510436713914e-211
```
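For context, a sketch of where the naive product does break down: with enough factors, a float64 product of values below 1 underflows to zero, while the sum of logs stays finite. The count of 5000 draws and the seed below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random(5000)  # far more factors than 500: well into underflow territory

naive = np.prod(p)             # float64 product underflows to exactly 0.0
log_total = np.sum(np.log(p))  # log-space sum stays finite (roughly -5000)

print(f'naive prod : {naive}')
print(f'sum of logs: {log_total}')
```

So at 500 factors the product happens to stay within float64 range (around 1e-220, above the smallest normal float64 of about 2.2e-308), which is why the three methods agree; the log-space form only becomes necessary once the magnitude leaves that range.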
user3015347
  • What is your question exactly? Should you trust the results? But what other options do you have? – Mr. For Example Jan 10 '21 at 02:45
  • So you cannot trust `tensorflow` due to it having differences with `numpy` after the 13th or 14th decimal place? – yudhiesh Jan 10 '21 at 05:45
  • 1
    My concern is that a simple product of 500 probabilities, as computed by `np.prod` or `tf.math.reduce_prod`, could be numerically unstable, though the three experiments suggest the results are acceptable. Should I trust `np.prod` or `tf.math.reduce_prod` based on these three examples? Or is my concern about numerical stability unfounded in this case? – user3015347 Jan 10 '21 at 12:45

0 Answers