
I have tried to implement gradient descent here in Python, but the cost J just seems to be increasing irrespective of the lambda and alpha values, and I am unable to figure out what the issue is. It would be great if someone could help me out with this. The input is two matrices Y and R with the same dimensions: Y is a movies x users matrix of ratings, and R is an indicator matrix saying whether a user has rated a movie.
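
For reference, the cost J that my costFunc below computes is (I believe) equivalent to this vectorized form, with x as a movies x features matrix and theta as a users x features matrix:

lmbda = 0.1   # same regularization strength as in costFunc
err = numpy.multiply(x * theta.T - y, r)   # error counted only over rated entries (R == 1)
J = 0.5 * numpy.sum(numpy.power(err, 2)) \
    + (lmbda / 2.0) * (numpy.sum(numpy.power(x, 2)) + numpy.sum(numpy.power(theta, 2)))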

#Recommender system ML
import numpy
import scipy.io

def gradientDescent(y,r):
        (nm,nu) = numpy.shape(y)          
        x =  numpy.mat(numpy.random.randn(nm,10))
        theta =  numpy.mat(numpy.random.randn(nu,10))
        for i in range(1,10):
                (x,theta) = costFunc(x,theta,y,r)


def costFunc(x,theta,y,r):

        X_tmp = numpy.power(x , 2)
        Theta_tmp = numpy.power(theta , 2)
        lmbda = 0.1
        reg = ((lmbda/2) * numpy.sum(Theta_tmp))+ ((lmbda/2)*numpy.sum(X_tmp))
        ans = numpy.multiply(numpy.power(((theta * x.T).T - y),2) , r)
        res = (0.5 * numpy.sum(ans))+reg
        print "J:",res
        print "reg:",reg
        (nm,nu) = numpy.shape(y)          
        X_grad = numpy.mat(numpy.zeros((nm,10)));
        Theta_grad = numpy.mat(numpy.zeros((nu,10)));
        alpha = 0.1
#       [m f] = size(X);
        (m,f) = numpy.shape(x);

        for i in range(0,m):                
                for k in range(0,f):
                        tmp = 0
#                       X_grad(i,k) += (((theta * x'(:,i)) - y(i,:)').*r(i,:)')' * theta(:,k);
                        tmp += ((numpy.multiply(((theta * x.T[:,i]) - y[i,:].T),r[i,:].T)).T) * theta[:,k];
                        tmp += (lmbda*x[i,k]);
                        X_grad[i,k] -= (alpha*tmp)

#                       X_grad(i,k) += (lambda*X(i,k));


#       [m f] = size(Theta); 
        (m,f) = numpy.shape(theta);


        for i in range(0,m):                
                for k in range(0,f):
                        tmp = 0
#                       Theta_grad(i,k) += (((theta(i,:) * x') - y(:,i)').*r(:,i)') * x(:,k);
                        tmp += (numpy.multiply(((theta[i,:] * x.T) - y[:,i].T),r[:,i].T)) * x[:,k];
                        tmp += (lmbda*theta[i,k]);
                        Theta_grad[i,k] -= (alpha*tmp)

#                        Theta_grad(i,k) += (lambda*Theta(i,k));

        return(X_grad,Theta_grad)

def main():
        mat1 = scipy.io.loadmat("C:\Users\ROHIT\Machine Learning\Coursera\mlclass-ex8\ex8_movies.mat")   
        Y = mat1['Y']
        R = mat1['R']   
        r = numpy.mat(R)
        y = numpy.mat(Y)   
        gradientDescent(y,r)

#if __init__ == '__main__':
main()
rohit
  • possible duplicate of [gradient descent using python and numpy ,machine learning](http://stackoverflow.com/questions/17784587/gradient-descent-using-python-and-numpy-machine-learning) – Thomas Jungblut Nov 10 '13 at 13:03

1 Answer


I did not check the whole code logic, but assuming it is correct, your costFunc is supposed to return the gradient of the cost function, and in these lines:

for i in range(1,10):
     (x,theta) = costFunc(x,theta,y,r)

you are overwriting the last values of x and theta with their gradients. The gradient is a measure of change, so you should move in the opposite direction (subtract the gradient instead of overwriting the values):

for i in range(1,10):
     (x_grad, theta_grad) = costFunc(x, theta, y, r)
     x -= x_grad
     theta -= theta_grad

But it seems that you already apply the minus sign (and the learning rate alpha) to the gradient inside costFunc, so you should add the returned values instead:

for i in range(1,10):
     (x_grad, theta_grad) = costFunc(x, theta, y, r)
     x += x_grad
     theta += theta_grad
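
For reference, here is a minimal vectorized sketch of one descent step for this kind of collaborative-filtering model (assuming x is movies x features and theta is users x features, with alpha and lmbda as in your code; I have not verified it against the rest of your loops):

err = numpy.multiply(x * theta.T - y, r)    # nm x nu error, zeroed where R == 0
X_grad = err * theta + lmbda * x            # nm x f
Theta_grad = err.T * x + lmbda * theta      # nu x f
x = x - alpha * X_grad
theta = theta - alpha * Theta_grad

Written this way, J (computed from the same err matrix) should decrease on each iteration for a small enough alpha.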
lejlot