
I am a beginner with the gradient descent concept. I implemented multivariate linear regression with gradient descent optimization, but my program doesn't converge: only the early iterations show any meaningful change in the cost. My methods (in my class) are below:

def gradientDescent(self, X, y, theta):
    '''
    Fits the model via gradient descent
    Arguments:
        X is a n-by-d numpy matrix
        y is an n-dimensional numpy vector
        theta is a d-dimensional numpy vector
    Returns:
        the final theta found by gradient descent
    '''
    n, d = X.shape
    self.JHist = []

    for i in range(self.n_iter):
        # store a copy of theta; appending theta itself would make every
        # history entry point at the same array, which is mutated below
        self.JHist.append((self.computeCost(X, y, theta), theta.copy()))
        print("Iteration: ", i+1, " Cost: ", self.JHist[i][0], " Theta: ", theta)

        # vectorized gradient step
        error = np.matmul(X, theta) - y
        theta = theta - self.alpha * np.matmul(X.T, error) / n

    return theta
def computeCost(self, X, y, theta):
    '''
    Computes the objective function
    Arguments:
        X is a n-by-d numpy matrix
        y is an n-dimensional numpy vector
        theta is a d-dimensional numpy vector
    '''
    n = len(y)
    d = np.matmul(X, theta) - y
    J = (1/(2*n)) * np.matmul(d.T, d)
    return J
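For comparison, here is a self-contained sketch of the same batch gradient descent as standalone functions (the toy data and the standalone function names are my own, not part of the original class):

```python
import numpy as np

def compute_cost(X, y, theta):
    # Halved mean squared error: J = (1/2n) * ||X @ theta - y||^2
    n = len(y)
    d = X @ theta - y
    return (1.0 / (2 * n)) * (d @ d)

def gradient_descent(X, y, theta, alpha=0.1, n_iter=2000):
    # Batch gradient descent on the least-squares objective
    n = X.shape[0]
    for _ in range(n_iter):
        error = X @ theta - y
        theta = theta - alpha * (X.T @ error) / n
    return theta

# Toy data: y = 2*x + 1, with an intercept column of ones in X
x = np.arange(5.0)
X = np.column_stack([np.ones_like(x), x])
y = 2 * x + 1
theta = gradient_descent(X, y, np.zeros(2))
```

On this well-scaled toy problem, theta converges to approximately [1, 2], the true intercept and slope.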

Report of the last five iterations: [screenshot of iteration log]. Could someone please help identify the problem?

amir1122
  • Please start your app in a debugger, set a breakpoint, and see what your code is doing. – duffymo Nov 17 '21 at 19:09
  • X data is in range [-1,1] but y is in range(4000-5000) , can it damage to my model? should i normalize target values too? – amir1122 Nov 17 '21 at 23:59
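On the question raised in the last comment: the gradient np.matmul(X.T, error) / n scales with the magnitude of y, so a learning rate tuned for features in [-1, 1] can produce steps that are far too large (divergence) or leave almost no visible progress when y is in the thousands. A common remedy is to standardize the target as well; a minimal sketch (the sample values here are illustrative, not the asker's data):

```python
import numpy as np

# Targets on a much larger scale than the features
y_raw = np.array([4000.0, 4200.0, 4500.0, 4800.0, 5000.0])

# Standardize to zero mean and unit variance before training
y_mean, y_std = y_raw.mean(), y_raw.std()
y_scaled = (y_raw - y_mean) / y_std

# Run gradient descent on y_scaled, then map predictions back:
# y_pred = y_pred_scaled * y_std + y_mean
```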

0 Answers