
Part of my assignment is to implement gradient descent to find the best approximation of the values c_1, c_2 and r_1 for the function y(x) = c_1*e^(r_1*x) + (c_1*x)^3 - (c_2*x)^2.

Given is only a list of 30 y-values corresponding to x from 0 to 29. I am implementing this in Enthought Canopy like this:

First I start with random values:

%matplotlib inline
import numpy as np
import matplotlib.pyplot as pyplt
c1 = -0.1
c2 = 0.1
r1 = 0.1
x = np.linspace(0,29,30) #start,stop,numitems
y = c1*np.exp(r1*x) + (c1*x)**3.0 - (c2*x)**2.0
pyplt.plot(x,y)

values_x = np.linspace(0,29,30)
values_y = np.array([0.2, -0.142682939241718, -0.886680607211679, -2.0095087143494, -3.47583798747496, -5.24396052331554, -7.2690008846359, -9.50451068338581, -11.9032604272567, -14.4176327390446, -16.9998176236069, -19.6019094345634, -22.1759550265352, -24.6739776668383, -27.0479889096801, -29.2499944927101, -31.2319972651608, -32.945998641919, -34.3439993255969, -35.3779996651013, -35.9999998336943, -36.161999917415, -35.8159999589895, -34.9139999796348, -33.4079999898869, -31.249999994978, -28.3919999975061, -24.7859999987616, -20.383999999385, -15.1379999996945])
pyplt.plot(values_x,values_y)

The squared error is quite high:

def Error(y,y0):
    return ( (1.0)*sum((y-y0)**2.0) )
print(Error(y, values_y))
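To make the error measure concrete, here is a tiny self-contained illustration of the same sum-of-squared-residuals on made-up vectors (the function name and values here are illustrative, not from the assignment):

```python
import numpy as np

def squared_error(y, y0):
    # sum of squared residuals between a model curve y and data y0
    return np.sum((y - y0) ** 2.0)

a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 1.0, 1.0])
print(squared_error(a, b))  # prints 5.0  (0 + 1 + 4)
```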

Now, to implement the gradient descent, I derived the partial derivatives of the error with respect to c_1, c_2 and r_1 and implemented the descent loop:

step_size = 0.0000005
accepted_Error = 50
dc1 = c1
dc2 = c2
dr1 = r1
y0 = values_y
previous_Error = 100000
left = True

for _ in range(1000):
    # the residual must use the measured data y0, not the model curve y,
    # which was evaluated once at the initial guess and never updated
    gc1 = (2.0) * sum( ( y0 - dc1*np.exp(dr1*x) - (dc1*x)**3 + (dc2*x)**2 ) * ( -1*np.exp(dr1*x) - (3*(dc1**2)*(x**3)) ) )
    gc2 = (2.0) * sum( ( y0 - dc1*np.exp(dr1*x) - (dc1*x)**3 + (dc2*x)**2 ) * ( 2*dc2*(x**2) ) )
    gr1 = (2.0) * sum( ( y0 - dc1*np.exp(dr1*x) - (dc1*x)**3 + (dc2*x)**2 ) * ( -1*dc1*x*np.exp(dr1*x) ) )

    dc1 = dc1 - step_size*gc1
    dc2 = dc2 - step_size*gc2
    dr1 = dr1 - step_size*gr1

    y1 = dc1*np.exp(dr1*x) + (dc1*x)**3.0 - (dc2*x)**2.0
    current_Error = Error(y0,y1)

    if (current_Error > accepted_Error):
        print(current_Error)
    else:
        break
    if (current_Error > previous_Error):
        print(current_Error)
        print("DIVERGING")
        break
    if (current_Error == previous_Error):
        print("CAN'T IMPROVE")
        break

    previous_Error = current_Error

However, the error does not improve at all, even when I vary the step size. Is there a mistake in my code?
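One quick way to isolate where a fit like this goes wrong is to compare the analytic gradient against a central finite-difference estimate: if the two disagree, the derivative formulas are wrong; if they agree, the problem lies elsewhere (e.g. which array the residual is built from, or the step size). A minimal self-contained sketch, using placeholder data rather than the assignment's values:

```python
import numpy as np

x = np.linspace(0, 29, 30)
data = 0.2 - 0.5 * x  # placeholder target; any fixed data works for the check

def model(c1, c2, r1):
    return c1 * np.exp(r1 * x) + (c1 * x) ** 3 - (c2 * x) ** 2

def error(c1, c2, r1):
    return np.sum((data - model(c1, c2, r1)) ** 2)

def analytic_grad(c1, c2, r1):
    # residual is computed against the data at the current parameters
    res = data - model(c1, c2, r1)
    gc1 = 2.0 * np.sum(res * (-np.exp(r1 * x) - 3 * c1 ** 2 * x ** 3))
    gc2 = 2.0 * np.sum(res * (2 * c2 * x ** 2))
    gr1 = 2.0 * np.sum(res * (-c1 * x * np.exp(r1 * x)))
    return np.array([gc1, gc2, gr1])

def numeric_grad(c1, c2, r1, h=1e-6):
    # central finite difference in each parameter direction
    p = np.array([c1, c2, r1], dtype=float)
    g = np.zeros(3)
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = h
        g[i] = (error(*(p + dp)) - error(*(p - dp))) / (2 * h)
    return g

a = analytic_grad(-0.1, 0.1, 0.1)
n = numeric_grad(-0.1, 0.1, 0.1)
print(np.max(np.abs(a - n) / (np.abs(n) + 1e-12)))  # small value => formulas agree
```

The derivative formulas in the question pass this check, which points the debugging effort at the loop bookkeeping rather than the math.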

Navita Saini
Marcel
  • I have reviewed the code again and again and the math seems all correct. I have to give up on it :( – Marcel Nov 20 '16 at 09:01
  • two possible bugs, you spell some variables as currentError and some as current_Error. You also do the update (previous_Error = current_Error) after the breaks (I believe this means that previous_Error is never updated) – Hugh Nov 20 '16 at 13:43

0 Answers