
I am new to TensorFlow and am working through the regression examples given in the tensorflow tutorials. Specifically, I am working on the third one, "polynomial_regression.py".

I followed the linear regression example fine, and have now moved on to the polynomial regression.

However, I wanted to try substituting another set of data for the one made up in the example. I did this by exchanging

xs = np.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167,
                 7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1], dtype=np.float32)
ys = np.asarray([1.7, 2.76, -2.09, 3.19, 1.9, 1.573, 3.366, 2.596, 2.53, 1.221,
                 2.827, -3.465, 1.65, -2.1004, 2.42, 2.94, 1.3], dtype=np.float32)
n_observations = xs.shape[0]

for

n_observations = 100
xs = np.linspace(-3, 3, n_observations)
ys = np.tan(xs) + np.random.uniform(-0.5, 0.5, n_observations)

That is, the second block is what was given in the example, and I wanted to run the same training with the new xs, ys, and n_observations. These were the only lines I changed. I also tried changing the dtype of the arrays to float64, but this did not change the output.

The output I am getting (from print(training_cost)) is just a repeated nan. When I switch back to the original data, the network runs fine and generates a fitting function.

Thank you for any ideas!

zephyrus

1 Answer


NaNs can be caused by many things, usually some form of numerical instability. Lowering the learning rate or using a more stable optimizer are good things to try.
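
For what it's worth, here is a minimal, self-contained sketch of how both suggestions could be applied to the questioner's data. It loosely follows the structure of the tutorial's polynomial_regression.py, but the degree-4 model, variable names, and hyperparameters below are my assumptions rather than the tutorial's exact code. It also standardizes the inputs, since raw x values up to ~10.8 raised to the 4th power, combined with plain gradient descent, can easily blow up in float32 and produce nan:

import numpy as np
import tensorflow as tf  # assumes TF 1.x; for TF 2.x use tf.compat.v1 with eager execution disabled

# The questioner's data
xs = np.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167,
                 7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1], dtype=np.float32)
ys = np.asarray([1.7, 2.76, -2.09, 3.19, 1.9, 1.573, 3.366, 2.596, 2.53, 1.221,
                 2.827, -3.465, 1.65, -2.1004, 2.42, 2.94, 1.3], dtype=np.float32)
n_observations = xs.shape[0]

# Standardizing the inputs keeps the higher-order polynomial terms from exploding;
# with raw values near 10, x**4 is already on the order of 10,000.
xs_norm = (xs - xs.mean()) / xs.std()

X = tf.placeholder(tf.float32, name='X')
Y = tf.placeholder(tf.float32, name='Y')

# Degree-4 polynomial model: Y_pred = b + W1*X + W2*X**2 + W3*X**3 + W4*X**4
Y_pred = tf.Variable(tf.random_normal([1]), name='bias')
for pow_i in range(1, 5):
    W = tf.Variable(tf.random_normal([1]), name='weight_%d' % pow_i)
    Y_pred = tf.add(tf.multiply(tf.pow(X, pow_i), W), Y_pred)

cost = tf.reduce_sum(tf.pow(Y_pred - Y, 2)) / (n_observations - 1)

# A smaller learning rate, or an adaptive optimizer like Adam, is much less likely to diverge
optimizer = tf.train.AdamOptimizer(learning_rate=0.01).minimize(cost)

n_epochs = 1000
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch_i in range(n_epochs):
        for (x, y) in zip(xs_norm, ys):
            sess.run(optimizer, feed_dict={X: x, Y: y})
        training_cost = sess.run(cost, feed_dict={X: xs_norm, Y: ys})
        if epoch_i % 100 == 0:
            print(training_cost)

With the inputs standardized and an adaptive optimizer, the printed cost should decrease steadily instead of turning into nan; the same two changes (or simply a much smaller learning rate for GradientDescentOptimizer) should carry over to the tutorial script with the questioner's data.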

Alexandre Passos