
I am trying to fit a very simple linear regression model using TensorFlow. However, the loss (mean squared error) blows up instead of reducing to zero.

First, I generate my data:

import numpy as np
import tensorflow as tf

x_data = np.random.uniform(high=10, low=0, size=100)
y_data = 3.5 * x_data - 4 + np.random.normal(loc=0, scale=2, size=100)

Then, I define the computational graph:

X = tf.placeholder(dtype=tf.float32, shape=100)
Y = tf.placeholder(dtype=tf.float32, shape=100)
m = tf.Variable(1.0)
c = tf.Variable(1.0)
Ypred = m*X + c
loss = tf.reduce_mean(tf.square(Ypred - Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=.1)
train = optimizer.minimize(loss)

Finally, I run it for 100 epochs:

session = tf.Session()
session.run(tf.global_variables_initializer())

steps = {}
steps['m'] = []
steps['c'] = []

losses = []

for k in range(100):
    _m = session.run(m)
    _c = session.run(c)
    _l = session.run(loss, feed_dict={X: x_data, Y: y_data})
    session.run(train, feed_dict={X: x_data, Y: y_data})
    steps['m'].append(_m)
    steps['c'].append(_c)
    losses.append(_l)

However, when I plot the losses, I get:

[Plot of the losses: the loss grows explosively with the number of epochs instead of decreasing.]

The complete code can also be found here.

highBandWidth

2 Answers


Whenever you see your cost increase monotonically with the number of epochs, that is a sure sign that your learning rate is too high. Repeatedly re-run your training, multiplying the learning rate by 1/10 each time, until the cost function clearly decreases with the number of epochs.
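
For example, here is a minimal sketch of that sweep, reusing the graph from your question (same data and variable names; the loop rebuilds the training op for each candidate rate):

import numpy as np
import tensorflow as tf

x_data = np.random.uniform(high=10, low=0, size=100)
y_data = 3.5 * x_data - 4 + np.random.normal(loc=0, scale=2, size=100)

X = tf.placeholder(dtype=tf.float32, shape=100)
Y = tf.placeholder(dtype=tf.float32, shape=100)
m = tf.Variable(1.0)
c = tf.Variable(1.0)
loss = tf.reduce_mean(tf.square(m * X + c - Y))

for lr in [0.1, 0.01, 0.001, 0.0001]:
    train = tf.train.GradientDescentOptimizer(learning_rate=lr).minimize(loss)
    with tf.Session() as session:
        session.run(tf.global_variables_initializer())
        first = session.run(loss, feed_dict={X: x_data, Y: y_data})
        for _ in range(100):
            session.run(train, feed_dict={X: x_data, Y: y_data})
        last = session.run(loss, feed_dict={X: x_data, Y: y_data})
    print('lr=%g: loss went from %.3g to %.3g' % (lr, first, last))
    if last < first:
        break  # found a learning rate for which the cost decreases

The sweep stops as soon as the cost decreases; the diverged runs show the loss exploding to inf/nan.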

stackoverflowuser2010

The learning rate is too large; 0.001 works well:

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

x_data = np.random.uniform(high=10, low=0, size=100)
y_data = 3.5 * x_data - 4 + np.random.normal(loc=0, scale=2, size=100)
X = tf.placeholder(dtype=tf.float32, shape=100)
Y = tf.placeholder(dtype=tf.float32, shape=100)
m = tf.Variable(1.0)
c = tf.Variable(1.0)
Ypred = m * X + c
loss = tf.reduce_mean(tf.square(Ypred - Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=.001)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
with tf.Session() as session:
    session.run(init)
    steps = {}
    steps['m'] = []
    steps['c'] = []

    losses = []

    for k in range(100):
        _m = session.run(m)
        _c = session.run(c)
        _l = session.run(loss, feed_dict={X: x_data, Y: y_data})
        session.run(train, feed_dict={X: x_data, Y: y_data})
        steps['m'].append(_m)
        steps['c'].append(_c)
        losses.append(_l)

plt.plot(losses)
plt.savefig('loss.png')

[Plot of losses: the loss now decreases with the number of epochs.]
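
As a side note on why 0.1 diverges while 0.001 converges: for linear regression the MSE is a quadratic in (m, c) with a constant Hessian, and plain gradient descent diverges once the learning rate exceeds 2/lambda_max, where lambda_max is the largest eigenvalue of that Hessian. A rough sketch of the computation (NumPy only; for x drawn uniformly from [0, 10] the threshold comes out around 0.03, between the two rates tried above):

import numpy as np

x_data = np.random.uniform(high=10, low=0, size=100)

# MSE(m, c) = mean((m*x + c - y)^2) is quadratic, so its Hessian is constant:
# H = (2/n) * [[sum(x^2), sum(x)], [sum(x), n]]
n = len(x_data)
H = (2.0 / n) * np.array([[np.sum(x_data ** 2), np.sum(x_data)],
                          [np.sum(x_data), n]])
lam_max = np.linalg.eigvalsh(H).max()
print('gradient descent diverges for learning rates above %.4f' % (2.0 / lam_max))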

(Potentially useful reference: https://gist.github.com/fuglede/ad04ce38e80887ddcbeb6b81e97bbfbc)

fuglede