
So I'm working with a pentadiagonal matrix A of size n: [image of the matrix]

(General information on pentadiagonal matrices: https://en.wikipedia.org/wiki/Pentadiagonal_matrix ) I'm using the Cholesky decomposition to get the matrix L of A, where L*L.T = A (L.T is the transpose of L), according to the standard algorithm. Implemented with NumPy, it is:

import numpy as np
from math import sqrt

def mycholesky(A):
    """Performs a Cholesky decomposition of A, which must
    be a symmetric and positive definite matrix. The function
    returns the lower triangular matrix, L."""
    n = len(A)

    # Create a zero matrix for L
    L = np.zeros((n, n), dtype=float)

    # Perform the Cholesky decomposition
    for i in range(n):
        for k in range(i + 1):
            tmp_sum = sum(L[i][j] * L[k][j] for j in range(k))

            if i == k:  # Diagonal elements
                # LaTeX: l_{kk} = \sqrt{ a_{kk} - \sum^{k-1}_{j=1} l^2_{kj} }
                L[i][k] = sqrt(A[i][i] - tmp_sum)
            else:
                # LaTeX: l_{ik} = \frac{1}{l_{kk}} \left( a_{ik} - \sum^{k-1}_{j=1} l_{ij} l_{kj} \right)
                L[i][k] = (A[i][k] - tmp_sum) / L[k][k]
    return L

You can see the page here, which also has the math formulas; I modified the algorithm a bit for Python 3 and NumPy use: https://www.quantstart.com/articles/Cholesky-Decomposition-in-Python-and-NumPy
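
As a quick sanity check (on a small symmetric positive-definite test matrix picked just for illustration), the result can be compared against NumPy's built-in routine:

import numpy as np

A = np.array([[ 6.0,  3.0,  4.0,  8.0],
              [ 3.0,  6.0,  5.0,  1.0],
              [ 4.0,  5.0, 10.0,  7.0],
              [ 8.0,  1.0,  7.0, 25.0]])

L = mycholesky(A)
print(np.allclose(L @ L.T, A))                # True
print(np.allclose(L, np.linalg.cholesky(A)))  # True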

I want to optimize the algorithm because the matrix A I'm working with is sparse, and I want to test it for very large n (e.g. n = 10000). With the classical Cholesky it isn't optimized, because lots of zeros that don't need to be accessed are still processed. What I've tried so far is changing the range in the line

tmp_sum = sum(L[i][j] * L[k][j] for j in range(k))

to:

tmp_sum = sum(L[i][j] * L[k][j] for j in range(k-2,k))

in order not to compute the sum over the zeros each time. Can it be optimized further? Zeros are still being accessed, and computations that aren't needed are still being done. Or would another solution be to take a band representation of the original A and apply the Cholesky on that?

1 Answer


There is definitely more that can be done to optimize the Cholesky of a banded matrix (in your case, pentadiagonal).
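
For instance, since L inherits the band structure of A, both loops of the hand-written algorithm can be restricted to the band. A minimal sketch, assuming a symmetric positive-definite A with bandwidth p = 2 (pentadiagonal); banded_cholesky is just an illustrative name:

import numpy as np
from math import sqrt

def banded_cholesky(A, p=2):
    """Cholesky factor of a banded SPD matrix A with lower bandwidth p.
    Only entries inside the band are touched, so the work is O(n * p^2)
    instead of O(n^3) for the dense triple loop."""
    n = len(A)
    L = np.zeros((n, n), dtype=float)
    for i in range(n):
        # L[i][k] can be nonzero only for i - p <= k <= i
        for k in range(max(0, i - p), i + 1):
            # both L[i][j] and L[k][j] vanish for j < i - p
            tmp_sum = sum(L[i][j] * L[k][j] for j in range(max(0, i - p), k))
            if i == k:
                L[i][k] = sqrt(A[i][i] - tmp_sum)
            else:
                L[i][k] = (A[i][k] - tmp_sum) / L[k][k]
    return L

This still stores the full n x n arrays, though, so for large n a dedicated banded routine is the better option.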

In particular, I would point you to an existing solution inside Python infrastructure: scipy.linalg.cholesky_banded.

Using this would allow you to take advantage of the optimizations that are already implemented, and by looking at the actual code behind this subroutine you can find the ideas behind them.
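
A minimal usage sketch (the numerical values below are made up only to produce a diagonally dominant, hence positive-definite, pentadiagonal test matrix; the routine expects the matrix in banded storage rather than as a full n x n array):

import numpy as np
from scipy.linalg import cholesky_banded, cho_solve_banded

n = 10000
# Lower banded storage for a symmetric pentadiagonal matrix:
# row 0 = main diagonal, row 1 = first subdiagonal, row 2 = second subdiagonal.
ab = np.zeros((3, n))
ab[0, :] = 4.0     # main diagonal
ab[1, :-1] = -1.0  # first subdiagonal
ab[2, :-2] = -0.5  # second subdiagonal

# Cholesky factor, also returned in banded storage.
cb = cholesky_banded(ab, lower=True)

# The factor can be reused to solve A x = b.
b = np.ones(n)
x = cho_solve_banded((cb, True), b)

For a fixed bandwidth this stores only 3*n entries of A instead of n^2, and the factorization cost grows linearly with n, so n = 10000 is not a problem.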

The research in sparse matrix factorizations and preserving sparsity is very active, and a lot of solutions are available, so depending on your needs (coding/researching new algorithms vs getting particular matrices factorized) you have different courses of action.

Anton Menshov
  • Just in case, more elaborate and research-oriented questions on this topic can be more visible at [Computational Science SE](https://scicomp.stackexchange.com/). – Anton Menshov Dec 15 '19 at 21:35