I am trying to implement the gradient descent algorithm (for a simple linear fit y = alpha + beta*x) in Python, and the following is my code:
import time

import numpy as np

def grad_des(xvalues, yvalues, R=0.01, epsilon=0.0001, MaxIterations=1000):
    xvalues = np.array(xvalues)
    yvalues = np.array(yvalues)
    length = len(xvalues)
    alpha = 1
    beta = 1
    converged = False
    i = 0
    # Initial cost: half the mean squared error of the line alpha + beta*x.
    # The comprehension index is j so it does not shadow the iteration counter i.
    cost = sum((alpha + beta * xvalues[j] - yvalues[j]) ** 2 for j in range(length)) / (2 * length)
    start_time = time.time()
    while not converged:
        # Partial derivatives of the cost with respect to alpha and beta.
        alpha_deriv = sum((alpha + beta * xvalues[j] - yvalues[j]) for j in range(length)) / length
        beta_deriv = sum((alpha + beta * xvalues[j] - yvalues[j]) * xvalues[j] for j in range(length)) / length
        # One gradient step of size R.
        alpha = alpha - R * alpha_deriv
        beta = beta - R * beta_deriv
        new_cost = sum((alpha + beta * xvalues[j] - yvalues[j]) ** 2 for j in range(length)) / (2 * length)
        if abs(cost - new_cost) <= epsilon:
            print('Converged')
            print('Number of Iterations:', i)
            converged = True
        cost = new_cost
        i = i + 1
        if i == MaxIterations:
            print('Maximum Iterations Exceeded')
            converged = True
    print("Time taken: " + str(round(time.time() - start_time, 2)) + " seconds")
    return alpha, beta
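For example, I call it like this (the data here is just made-up sample values to show the call, not my real dataset):

# Hypothetical sample data: points on the line y = 3 + 0.5*x.
xs = [x / 100.0 for x in range(100)]
ys = [3.0 + 0.5 * x for x in xs]
alpha, beta = grad_des(xs, ys)
print(alpha, beta)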
This code works, but the problem is that it takes more than 25 seconds for roughly 600 iterations, which does not feel efficient enough. Converting the inputs to NumPy arrays before doing the calculations already cut the time from about 300 seconds down to 25 seconds, but I feel it can be reduced further. Can anybody help me improve this algorithm?
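My guess is that the per-element list comprehensions are the remaining bottleneck. A fully vectorized version of the same update would look roughly like this (just a sketch of what I have in mind, not code I have benchmarked; grad_des_vec is a placeholder name):

import numpy as np

def grad_des_vec(xvalues, yvalues, R=0.01, epsilon=0.0001, max_iterations=1000):
    x = np.asarray(xvalues, dtype=float)
    y = np.asarray(yvalues, dtype=float)
    n = len(x)
    alpha, beta = 1.0, 1.0
    # Same cost function as above, computed with array operations only.
    cost = np.sum((alpha + beta * x - y) ** 2) / (2 * n)
    for i in range(max_iterations):
        residual = alpha + beta * x - y  # length-n array, no Python-level loop
        alpha -= R * np.sum(residual) / n
        beta -= R * np.sum(residual * x) / n
        new_cost = np.sum((alpha + beta * x - y) ** 2) / (2 * n)
        if abs(cost - new_cost) <= epsilon:
            print('Converged after', i + 1, 'iterations')
            break
        cost = new_cost
    return alpha, beta

Is this the right direction, or is there something better?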
Thanks