
I recently implemented gradient descent for linear regression, but as I increase the number of iterations, the values of 'w' and 'c' grow in proportion to the number of iterations instead of converging. Can anyone tell me where the problem is? You can use the data below to define 'x' and 'y'.

import numpy as np

w = c = 0
alpha = 0.0001
y_calc = w * x + c
n = len(x)
p = float(n)
u = 0
for u in range(100000):
    y_calc = w * x + c
    w = w + alpha * ((1/p) * np.sum(l * (y - y_calc)))
    c = c + alpha * ((1/p) * np.sum(y - y_calc))
    u += 1
print(w, c)

x = np.array([32.50234527, 53.42680403, 61.53035803, 47.47563963, 59.81320787, 55.14218841, 52.21179669, 39.29956669, 48.10504169, 52.55001444, 45.41973014, 54.35163488, 44.1640495, 58.16847072, 56.72720806, 48.95588857, 44.68719623, 60.29732685, 45.61864377, 38.81681754])

y = np.array([31.70700585, 68.77759598, 62.5623823, 71.54663223, 87.23092513, 78.21151827, 79.64197305, 59.17148932, 75.3312423, 71.30087989, 55.16567715, 82.47884676, 62.00892325, 75.39287043, 81.43619216, 60.72360244, 82.89250373, 97.37989686, 48.84715332, 56.87721319])

I expected w to be 1.389738813163012 and c to be 0.03509461674147458.
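
(As a sanity check, not part of the original script: a closed-form least-squares fit gives reference values for the slope and intercept to compare against, assuming `x` and `y` are the arrays above.)

# Sanity check: closed-form least-squares fit for reference values.
w_ref, c_ref = np.polyfit(x, y, 1)
print(w_ref, c_ref)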

sanepunk
  • You have not defined `l`, and note that you do not need `p = float(n)` or `u += 1` in your code; neither has any effect. – Matt Hall Jul 27 '23 at 11:01

1 Answer


Gradient descent is sensitive to the scale of your features. Try scaling the data, for example by subtracting the mean and dividing by the standard deviation:

x = (x - np.mean(x)) / np.std(x)

This will stabilize the gradient calculation, even for a single feature.

Also, check your maths: normally you subtract alpha times the gradient to update the weight.
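
For example, here is a minimal, self-contained sketch of a corrected loop. It replaces the undefined `l` with the (scaled) feature and uses synthetic stand-in data rather than the question's arrays, since the exact fix depends on what `l` was meant to be:

import numpy as np

# Synthetic stand-in data (hypothetical values, not the question's arrays).
rng = np.random.default_rng(0)
x = rng.uniform(30.0, 65.0, size=20)
y = 1.4 * x + rng.normal(0.0, 8.0, size=20)

# Standardize the feature so the gradient steps are well conditioned.
mu, sigma = np.mean(x), np.std(x)
x_s = (x - mu) / sigma

w = c = 0.0
alpha = 0.01                 # a larger rate is safe once x is standardized
n = float(len(x))

for _ in range(10_000):
    y_calc = w * x_s + c
    # Gradient of the MSE: dJ/dw = -(2/n) * sum(x_s * (y - y_calc)),
    # so subtracting alpha * dJ/dw becomes an addition on the residual form.
    w += alpha * (2.0 / n) * np.sum(x_s * (y - y_calc))
    c += alpha * (2.0 / n) * np.sum(y - y_calc)

# Map the coefficients back to the original units of x.
w_orig = w / sigma
c_orig = c - w_orig * mu
print(w_orig, c_orig)

With the standardized input the loop converges instead of drifting, and the back-transformed slope should land near the 1.4 used to generate the synthetic data.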

Matt Hall