
I need to compute the matrix A raised to the power of -1/2, which essentially means the square root of the inverse of the initial matrix.

If A is singular, the Moore-Penrose generalized inverse is computed with the ginv function from the MASS package; otherwise the regular inverse is computed with the solve function.
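In code, that branching looks roughly like this (the helper name inverse_or_ginv is just mine for illustration):

library(Matrix)  # provides rankMatrix()
library(MASS)    # provides ginv()

# Illustrative helper: regular inverse for full-rank matrices,
# Moore-Penrose generalized inverse otherwise
inverse_or_ginv <- function(M) {
  if (rankMatrix(M) == nrow(M)) {
    solve(M)
  } else {
    ginv(M)
  }
}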

Matrix A is defined below:

A <- structure(c(604135780529.807, 0, 58508487574887.2, 67671936726183.9,
                 0, 0, 0, 1, 0, 0, 0, 0, 58508487574887.2, 0, 10663900590720128,
                 10874631465443760, 0, 0, 67671936726183.9, 0, 10874631465443760,
                 11315986615387788, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1),
               .Dim = c(6L, 6L))

I check singularity by comparing the rank of the matrix with its dimension (rankMatrix comes from the Matrix package):

library(Matrix)
rankMatrix(A) == nrow(A)

The above code returns FALSE, so I have to use ginv to get the inverse. The inverse of A is computed as follows:

library(MASS)
A_inv <- ginv(A)

The square root of the inverse matrix is computed with the sqrtm function from the expm package.

library(expm)
sqrtm(A_inv)

The function returns the following error:

Error in solve.default(X[ii, ii] + X[ij, ij], S[ii, ij] - sumU) :
Lapack routine zgesv: system is exactly singular

So how can we compute the square root in this case? Please note that matrix A is not always singular, so we have to provide a general solution to the problem.

  • Compute a singular value decomposition. Do whatever you want to the diagonal matrix in the middle. – tmyklebu Jan 30 '15 at 00:14

1 Answer

Your question relates to two distinct problems:

  1. Inverse of a matrix
  2. Square root of a matrix

Inverse

The inverse does not exist for singular matrices. In some applications, the Moore-Penrose or some other generalised inverse may be taken as a suitable substitute for the inverse. However, note that computer numerics incur rounding errors in most cases, and these errors may make a singular matrix appear regular to the computer, or vice versa.
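For instance, rankMatrix() accepts a tol argument, so the singularity check itself can be made tolerance-aware (the value below is only an illustrative choice):

library(Matrix)
# singular values below tol, relative to the largest one, are treated as zero
rankMatrix(A, tol = 1e-10) < nrow(A)   # TRUE indicates numerical rank deficiency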

If A always exhibits the block structure of the matrix you give, I suggest considering only its non-trivial 3×3 block

A3 = A[ c( 1, 3, 4 ), c( 1, 3, 4 ) ]

A3
             [,1]         [,2]         [,3]
[1,] 6.041358e+11 5.850849e+13 6.767194e+13
[2,] 5.850849e+13 1.066390e+16 1.087463e+16
[3,] 6.767194e+13 1.087463e+16 1.131599e+16

instead of all of A, for better efficiency and fewer rounding issues. The remaining diagonal entries of 1 would remain 1 in the inverse of the square root, so there is no need to clutter the calculation with them. To get an impression of the impact of this simplification, note that R can calculate

A3inv = solve(A3)

while it could not calculate

Ainv = solve(A)

But we will not need A3inv, as will become evident below.

Square root

As a general rule, the square root of a matrix A will only exist if the matrix has a diagonal Jordan normal form (https://en.wikipedia.org/wiki/Jordan_normal_form). Hence, there is no truly general solution of the problem as you require.
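A minimal counterexample (my own illustration, not specific to your matrix): the 2×2 Jordan block with eigenvalue 0 has no square root at all.

# Classic example of a matrix with no square root:
N = matrix(c(0, 0, 1, 0), nrow = 2)   # rows: (0 1) and (0 0)
N %*% N                                # the zero matrix: N is nilpotent
# Any B with B %*% B == N would satisfy (B %*% B) %*% (B %*% B) == 0,
# so B itself would be nilpotent and hence B %*% B == 0, a contradiction.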

Fortunately, just as “most” (real or complex) matrices are invertible, “most” (real or complex) matrices have a diagonal complex Jordan normal form. In this case, the Jordan normal form

A3 = T·J·T⁻¹

can be calculated in R as follows:

library(Matrix)   # provides Diagonal()
X = eigen(A3)
T = X$vectors
J = Diagonal( x=X$values )

To test this recipe, compare

Tinv = solve(T)
T %*% J %*% Tinv

with A3. They should match (up to rounding errors) if A3 has a diagonal Jordan normal form.
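One way to quantify the match, for example:

# largest element-wise deviation relative to the scale of A3;
# this should be of the order of machine precision
max(abs(T %*% J %*% Tinv - A3)) / max(abs(A3))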

Since J is diagonal, its square root is simply the diagonal matrix of the square roots of its entries

Jsqrt = Diagonal( x=sqrt( X$values ) )

so that Jsqrt·Jsqrt = J. Moreover, this implies

(T·Jsqrt·T⁻¹)² = T·Jsqrt·T⁻¹·T·Jsqrt·T⁻¹ = T·Jsqrt·Jsqrt·T⁻¹ = T·J·T⁻¹ = A3

so that in fact we obtain

√A3 = T·Jsqrt·T⁻¹

or in R code

A3sqrt = T %*% Jsqrt %*% Tinv

To test this, calculate

A3sqrt %*% A3sqrt

and compare with A3.
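For instance, a comparison up to a tolerance (the tolerance value here is a deliberately loose, illustrative choice):

# as.matrix() drops the Matrix class so that base all.equal() can be used;
# the result is TRUE or a description of the difference
all.equal(as.matrix(A3sqrt %*% A3sqrt), A3, tolerance = 1e-6)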

Square root of the inverse

The square root of the inverse (or, equivalently, the inverse of the square root) can be calculated easily once a diagonal Jordan normal form has been calculated. Instead of J use

Jinvsqrt = Diagonal( x=1/sqrt( X$values ) )

and calculate, analogously to above,

A3invsqrt = T %*% Jinvsqrt %*% Tinv

and observe

A3·A3invsqrt·A3invsqrt = T·J·T⁻¹·T·Jinvsqrt·T⁻¹·T·Jinvsqrt·T⁻¹ = T·(J·Jinvsqrt·Jinvsqrt)·T⁻¹ = T·T⁻¹ = 1

which is the unit matrix, so that A3invsqrt is the desired result.
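For the original 6×6 problem, the remaining diagonal entries of 1 stay 1 under both inversion and square root, so the 3×3 result only needs to be written back into its original positions. A minimal sketch of that merge (Ainvsqrt and idx are illustrative names of my own):

idx = c(1, 3, 4)                            # positions of the non-trivial block within A
Ainvsqrt = diag(nrow(A))                    # the 1-entries of A remain 1
Ainvsqrt[idx, idx] = as.matrix(A3invsqrt)   # insert the 3x3 result
Ainvsqrt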

In case A3 is not invertible, a generalised inverse (not necessarily the Moore-Penrose one) can be calculated by replacing all undefined entries in Jinvsqrt by 0, but as I said above, this should be done with suitable care in the light of the overall application and its stability against rounding errors.
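A sketch of that replacement, with the cut-off left as an explicit, application-dependent choice:

cutoff = 1e-12 * max(abs(X$values))   # illustrative threshold; pick with your application in mind
d = numeric(length(X$values))
keep = abs(X$values) > cutoff
d[keep] = 1 / sqrt(X$values[keep])    # reciprocal square roots of the "non-zero" eigenvalues
Jinvsqrt = Diagonal(x = d)            # the undefined entries stay 0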

In case A3 does not have a diagonal Jordan normal form, there is no square root, so the above formulas will yield some other result. To avoid running into this case by bad luck, it is best to implement a test of whether

A3invsqrt %*% A3 %*% A3invsqrt

is close enough to what you would consider a unit matrix (this only applies if A3 was invertible in the first place).
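For example (the tolerance is again a judgement call):

check = A3invsqrt %*% A3 %*% A3invsqrt
max(abs(check - diag(nrow(A3)))) < 1e-8   # TRUE if acceptably close to the unit matrix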

PS: Note that you are free to choose the sign (±) of each diagonal entry of Jinvsqrt.

  • As for your answer: how can you be sure that when you pass from A to A3 you don't lose the true context, i.e. that you can draw the same inferences from A3 afterwards as from A? – Erdogan CEVHER Aug 31 '17 at 10:34
  • A has the form

        * 0 * * 0 0
        0 1 0 0 0 0
        * 0 * * 0 0
        * 0 * * 0 0
        0 0 0 0 1 0
        0 0 0 0 0 1

    where * marks what I call the interesting entries of A, which constitute the 3×3 matrix A3. Apart from these, A is a diagonal matrix of ones. The context which is lost is thus easily restored at the end, because the inverse of the square root of 1 is 1, and the same holds for a diagonal matrix of ones: all you have to do is merge the matrix `A3invsqrt` back into the above form by inserting its numbers into the * positions in the straightforward order. – Bernhard Bodenstorfer Sep 01 '17 at 17:46
  • Erratum: I must correct a mistake relating to the exceptional case: even in the case of a non-diagonal Jordan normal form, square roots exist, unless an associated eigenvalue happens to be 0. A square root of a given Jordan block of dimension n for eigenvalue λ≠0 is triangular and has all entries equal in its diagonal and each superdiagonal. They can be calculated from a system of equations where n determines the number of variables. The system can be solved step by step (variable by variable), similar to a triangular system of linear equations. – Bernhard Bodenstorfer May 25 '18 at 07:06