
I have n arbitrary p x 1 vectors x_i, n p x k matrices A_i, and n p x p positive semidefinite matrices S_i, where some (often most) of the S_i are the same (for example, only two different S matrices: one positive definite that applies to i = 1, ..., n-1 and a semidefinite S for i = n). I want to make the following linear transformations for all x_i and A_i:

x_i* = inv(L_i)x_i and

A_i* = inv(L_i)A_i, where

L_i is a lower triangular matrix such that L_i L'_i = S_i (or, even better, L_i D_i L'_i = S_i, where D_i is a diagonal matrix).

n can range from a few to thousands or even more, p is usually less than 10, and k is usually less than 100. S can contain rows and columns of zeros, and it is originally formed as BB', where B is a lower tridiagonal matrix with nonnegative diagonal elements. These B's are not available, though.

What would be the optimal way of accomplishing this in terms of speed and accuracy, with more weight on accuracy?

Currently I use my own LDL decomposition function on each distinct S, invert L, and compute the transformations, as I have been dealing with cases with a small number of different S and large n. If I have understood correctly, it would be wiser in terms of accuracy to just solve the linear equations without explicitly inverting the matrix, but for speed explicit inversion seems to be the better option?

I'm using Fortran, and I would rather use some LAPACK function for the factorization instead of my own LDL decomposition, which I cannot guarantee to be stable etc., but I'm not sure which routines would be the right choice.
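
To make this concrete, here is a minimal sketch of the kind of code I have in mind for the positive definite case, using dpotrf for the factorization and triangular solves (dtrsv/dtrsm) instead of forming inv(L) explicitly; the dimensions and data below are just placeholders:

```fortran
program transform_sketch
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  integer, parameter :: p = 4, k = 3
  real(dp) :: S(p,p), L(p,p), x(p), A(p,k)
  integer :: i, info

  ! Toy data standing in for the real S_i, x_i, A_i
  S = 0.0_dp
  do i = 1, p
    S(i,i) = 2.0_dp            ! a trivially positive definite S
  end do
  x = 1.0_dp
  A = 1.0_dp

  L = S                        ! dpotrf overwrites its input, so work on a copy
  call dpotrf('L', p, L, p, info)   ! S = L * L', factor stored in the lower triangle
  if (info /= 0) stop 'S is not (numerically) positive definite'

  ! x* = inv(L) x and A* = inv(L) A, computed as triangular solves
  ! L x* = x and L A* = A instead of inverting L explicitly
  call dtrsv('L', 'N', 'N', p, L, p, x, 1)
  call dtrsm('L', 'L', 'N', 'N', p, k, 1.0_dp, L, p, A, p)

  print *, 'x* =', x
end program transform_sketch
```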

Jouni Helske
  • What kind of equations are you working with? You have rectangular matrices S which are rank-deficient, so you could use SVD or a rank-revealing QR or LU decomposition with pivoting. Factorisation and solution algorithms for these should be available in LAPACK. However, they are not symmetric in the sense you describe; why is that necessary? – sigma Dec 16 '12 at 22:01
  • Idea here is to decorrelate x_i's which are defined as x_i=A_i*z_i+e_i, where cov(e_i)=S_i, so using symmetrical decomposition I can transform the equations using L_i and use cov(e_i)=D_i. – Jouni Helske Dec 17 '12 at 06:50
  • So this is a generalised least-squares problem `z = (A'*S**-1*A)**-1 * A'*S**-1*x`? You say your S_i are p x k, but how do you get nonsquare covariance matrices? BB' was also square, after all. It doesn't look like the LDL' factorisation itself is easily obtainable from LAPACK, but solver routines employing it do exist. You can always use Fortran's `cpu_time` or other ways to check your program's performance. – sigma Dec 17 '12 at 17:24

1 Answer


What you need is, I guess, the Cholesky factorization and a solver that works with the Cholesky-factored form. Going for LAPACK is definitely a good idea. Look at the ?potrf (Cholesky decomposition) and ?potrs (solution of linear equations using the Cholesky factor) routines. See the linear equations section of the LAPACK Users' Guide.
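
Roughly along these lines (just a sketch with toy data, double precision, error handling kept minimal):

```fortran
program potrs_sketch
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  integer, parameter :: p = 4, nrhs = 5
  real(dp) :: S(p,p), B(p,nrhs)
  integer :: i, info

  S = 0.0_dp
  do i = 1, p
    S(i,i) = 2.0_dp           ! toy positive definite S
  end do
  B = 1.0_dp                  ! right-hand sides; overwritten with the solution

  call dpotrf('L', p, S, p, info)              ! S = L * L', factored in place
  if (info /= 0) stop 'dpotrf failed: S is not positive definite'
  call dpotrs('L', p, nrhs, S, p, B, p, info)  ! solve L * L' * Z = B in place

  print *, 'Z =', B
end program potrs_sketch
```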

Bálint Aradi
  • Aren't those for positive definite, rather than positive semidefinite matrices? – dmuir Dec 13 '12 at 19:21
  • You're right, that only works for positive definite matrices. Otherwise, the LU decomposition (?getrf) could be used to decompose the matrix into L . U. – Bálint Aradi Dec 13 '12 at 19:39
  • Yes, but I need a symmetric decomposition such as LDL'. I realized that if there are constant variables (the corresponding row and column are zero), I can work with the subsets of S, x and A, but even this subset of S can still be only semidefinite. – Jouni Helske Dec 14 '12 at 07:04
  • OK. But as far as I remember from numerical mathematics, the LDL decomposition is only guaranteed to work for Hermitian positive definite matrices (see e.g. the [Wikipedia entry about Cholesky decomposition](http://en.wikipedia.org/wiki/Cholesky_decomposition)). – Bálint Aradi Dec 14 '12 at 07:08
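
For the semidefinite case discussed in these comments, one further option (an assumption on my part, not something suggested above) is LAPACK's pivoted Cholesky routine dpstrf (available since LAPACK 3.2), which factors P' S P = L L' for a positive semidefinite S and reports its numerical rank. A rough sketch with toy data:

```fortran
program pstrf_sketch
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  integer, parameter :: p = 4
  real(dp) :: S(p,p), work(2*p), tol
  integer :: piv(p), rank, info, i

  ! Toy semidefinite S: last row and column are zero
  S = 0.0_dp
  do i = 1, p - 1
    S(i,i) = 1.0_dp
  end do

  tol = -1.0_dp               ! negative tolerance -> dpstrf uses its default threshold
  call dpstrf('L', p, S, p, piv, rank, tol, work, info)
  print *, 'numerical rank of S:', rank
  print *, 'pivot order:', piv
end program pstrf_sketch
```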