
I'm trying to implement the gradient descent method for solving $Ax = b$, where $A$ is a symmetric positive definite matrix of size about $9600 \times 9600$. I thought my code was relatively simple:

import numpy as np

#Solves the problem Ax = b for x, to within epsilon tolerance or until MAX_ITERATION is reached
def GradientDescent(Amat, target, epsilon=.01, MAX_ITERATION=100, x=np.zeros(9604)):
    CurrentRes = target - np.matmul(Amat, x)      # initial residual r = b - Ax
    count = 0
    while np.linalg.norm(CurrentRes) > epsilon and count < MAX_ITERATION:
        Ar = np.matmul(Amat, CurrentRes)
        alpha = CurrentRes.T.dot(CurrentRes) / CurrentRes.T.dot(Ar)   # optimal step size
        x = x + alpha * CurrentRes
        Ax = np.matmul(Amat, x)
        CurrentRes = target - Ax                  # recompute the residual
        count = count + 1
    return (x, count, np.linalg.norm(CurrentRes))

#A is a square matrix about 9600x9600 and b is about 9600x1
GDSum = GradientDescent(A, b)

but the above takes almost 3 minutes to run a single iteration of the main while loop.

I didn't think that $9600 \times 9600$ was too big for NumPy to handle effectively, but even the step of computing alpha, which is just the quotient of two dot products, is taking over 30 seconds.

I tried to track down the problem by timing each operation in the while loop, and they are all running much slower than expected. A single matrix multiplication takes almost a minute. Only the steps involving vector addition or subtraction seem to run quickly.
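
This is roughly the kind of per-step timing I mean, shown here with a smaller synthetic SPD matrix so it runs quickly (the sizes and values below are made up purely for illustration):

import numpy as np
import time

n = 3000                                      # smaller than 9604, just for illustration
A_test = np.eye(n) + 0.01 * np.ones((n, n))   # synthetic symmetric positive definite matrix
b_test = np.ones(n)
x_test = np.zeros(n)
r = b_test - np.matmul(A_test, x_test)        # initial residual

t0 = time.perf_counter()
Ar = np.matmul(A_test, r)                     # matrix-vector product
t1 = time.perf_counter()
alpha = r.T.dot(r) / r.T.dot(Ar)              # the two dot products for alpha
t2 = time.perf_counter()
x_test = x_test + alpha * r                   # vector update
t3 = time.perf_counter()

print(f"matmul: {t1 - t0:.4f} s   alpha: {t2 - t1:.4f} s   update: {t3 - t2:.4f} s")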

1 Answer

#A is a square matrix about 9600x9600 and b is about 9600x1
GDSum = GradientDescent(A,b)

Perhaps the most relevant bit of information is missing.

Your function is fast when A and b are NumPy arrays, but it's terribly slow when they are plain Python lists.

Is that the case for you?
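
For example, a rough, self-contained way to see the difference (smaller synthetic sizes, purely illustrative):

import numpy as np
import time

n = 3000                                      # smaller than 9604, just to illustrate
A_arr = np.eye(n) + 0.01 * np.ones((n, n))    # an ndarray
b_arr = np.ones(n)
A_list = A_arr.tolist()                       # the same data as nested Python lists
b_list = b_arr.tolist()

t0 = time.perf_counter()
np.matmul(A_arr, b_arr)                       # pure ndarray matmul
t1 = time.perf_counter()
np.matmul(A_list, b_list)                     # NumPy must first convert the lists to arrays
t2 = time.perf_counter()

print(f"ndarray input: {t1 - t0:.3f} s   list input: {t2 - t1:.3f} s")

If the inputs really are lists, converting them once with np.asarray before calling GradientDescent avoids paying that conversion cost inside every loop iteration.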

aerobiomat
  • That is interesting. When I run the test code with A = np.ones((9604,9604)) and b = np.array(range(9604)) then it does run in about 16 seconds. I'm not sure what's going on though, because my matrix input is an np.ndarray type – MathManiac5772 Nov 16 '22 at 17:06
  • It appears it only runs quickly for some simple Numpy arrays. What information are you missing? I'm not sure I understand – MathManiac5772 Nov 16 '22 at 17:22
  • Sorry, I was referring to the type of the input data. – aerobiomat Nov 16 '22 at 17:29
  • I think I figured it out. When the vector b has shape (9604,1) it is slow, but when it has shape (9604,) it is fast. Why that makes the difference I cannot say. Is there some implicit call to an expensive reshape function? – MathManiac5772 Nov 16 '22 at 17:31
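
A likely explanation for that last observation, going only by standard NumPy broadcasting rules rather than anything confirmed in the thread: with x of shape (9604,), np.matmul(Amat, x) has shape (9604,), and subtracting that from a target of shape (9604, 1) broadcasts the residual up to a full (9604, 9604) matrix, so every subsequent matmul in the loop becomes a matrix-matrix product instead of a matrix-vector product. A tiny sketch of the effect:

import numpy as np

n = 5
A = np.eye(n)
x = np.zeros(n)                 # shape (n,)
b_col = np.ones((n, 1))         # shape (n, 1), like a 9604x1 b
b_flat = np.ones(n)             # shape (n,)

res_col = b_col - np.matmul(A, x)    # broadcasts to shape (n, n)
res_flat = b_flat - np.matmul(A, x)  # shape (n,), as intended

print(res_col.shape)   # (5, 5) -> later matmuls are full matrix-matrix products
print(res_flat.shape)  # (5,)

If that is what is happening, flattening b with b.ravel() (or b.reshape(-1)) before calling GradientDescent keeps every residual a 1-D vector.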