I am trying to use the Go gonum/mat matrix library to find the Cholesky factorization of a matrix. The goal is to get results similar to MATLAB's chol(). In MATLAB I have the matrix
P_inno = [ 0.10062   -0.042635  -0.072741  -0.1434;
          -0.042635   0.21293   -0.02717   -0.052181;
          -0.072741  -0.02717    0.26536    0.27184;
          -0.1434    -0.052181   0.27184    0.86335]
and chol(P_inno) outputs:

0.31721   -0.13441  -0.22932  -0.45207
0          0.44143  -0.13137  -0.25585
0          0         0.44217   0.30432
0          0         0         0.70775
So far, my Go code is:

arr := []float64{
    0.067503, 0, 0, 0,
    0, 0.063724, 0, 0,
    0, 0, 0.11575, 0,
    0, 0, 0, 0.05,
}
P_inno := mat.NewSymDense(4, arr)

// attempt at the Cholesky factorization
var chol mat.Cholesky
if ok := chol.Factorize(P_inno); !ok {
    fmt.Println("matrix is not positive definite")
}

var t mat.TriDense
fmt.Println(chol.UTo(&t))
This doesn't work: t (the TriDense) ends up empty, and I'm not sure how to call UTo() correctly.
The library documentation says:
The decomposition can be constructed using the Factorize method. The factorization itself can be extracted using the UTo or LTo methods, and the original symmetric matrix can be recovered with ToSym.
I could use some help with this step: I don't understand how to extract the actual factorization.
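My reading of that paragraph is that UTo, LTo, and ToSym each fill a destination matrix in place rather than returning one, roughly like the sketch below (this is only my interpretation; u, l, and orig are my own names, and it assumes chol already holds a successful factorization from the code above):

var u, l mat.TriDense
chol.UTo(&u) // upper triangular factor U, so that P_inno = U^T * U
chol.LTo(&l) // lower triangular factor L, so that P_inno = L * L^T
var orig mat.SymDense
chol.ToSym(&orig) // should reconstruct the original P_inno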
Concretely, I have tried the following:

- chol.UTo(&t) and chol.UTo(t), which give "(no value) used as value";
- chol.Factorize(P_inno), which returns a boolean, not the factorization.
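Putting everything together, this is my current best guess at a complete program; is this the intended way to use UTo? (mat.Formatted is just my guess at how to print the result.)

package main

import (
    "fmt"

    "gonum.org/v1/gonum/mat"
)

func main() {
    arr := []float64{
        0.067503, 0, 0, 0,
        0, 0.063724, 0, 0,
        0, 0, 0.11575, 0,
        0, 0, 0, 0.05,
    }
    P_inno := mat.NewSymDense(4, arr)

    var chol mat.Cholesky
    if ok := chol.Factorize(P_inno); !ok {
        fmt.Println("matrix is not positive definite")
        return
    }

    // Guess: UTo fills t in place and returns nothing, so it cannot be
    // passed directly to fmt.Println as I did above.
    var t mat.TriDense
    chol.UTo(&t)
    fmt.Printf("U =\n%v\n", mat.Formatted(&t))
}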