What is the fastest algorithm (a link to a C or C++ example would be cool) to check whether a small square matrix (smaller than 16×16) is singular (non-invertible, det = 0)?
- Probably Gaussian Elimination. – nhahtdh May 31 '12 at 02:51
- Not sure if it is the fastest, but the SVD will tell you. If any of the singular values found by the SVD are 0, then your matrix is singular. – Justin Peel May 31 '12 at 03:37
- @JustinPeel: LU decomposition will outperform SVD for the determinant, but SVD gives you more info: it tells you "which directions" are singular for the matrix. Anyway, testing if a matrix is numerically singular is best performed by computing (a bound on) its condition number, not by computing a determinant (the determinant here is 16-linear, so small errors are raised to the 16th power), so SVD is OK if speed is not a serious issue. – Alexandre C. May 31 '12 at 08:14
- I think this is a common Stack Overflow situation: here's how to do X – is it really what you want to do? Why do you want to find the determinant / test whether the matrix is invertible? It is possible that you will want the SVD anyway, to recover from the situation where the matrix is not invertible, or is almost not invertible. – mcdowella Jun 01 '12 at 04:22
4 Answers
The best way is to compute the condition number via the SVD and check whether it is greater than 1 / epsilon, where epsilon is the machine precision.
If you allow false negatives (i.e. a matrix is singular, but your algorithm may not detect it), you can use the max |a_ii| / min |a_ii| formula from the Wikipedia article as a proxy for the condition number, but you have to compute a QR decomposition first (the formula applies to triangular matrices): write A = QR with Q orthogonal and R triangular; then cond(A) = cond(R), and the a_ii above are the diagonal entries of R. There are also techniques to compute the condition number of R with O(N) operations, but they are more complex.
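For concreteness, here is a minimal C++ sketch of that test, assuming the Eigen library (my choice, not part of the original answer). It applies the 1 / epsilon threshold described above, rewritten as smin <= smax * epsilon to avoid dividing by zero:

// Minimal sketch: declare a matrix numerically singular when its
// condition number (ratio of extreme singular values) exceeds 1/epsilon.
#include <Eigen/Dense>
#include <limits>

bool is_numerically_singular(const Eigen::MatrixXd& A) {
    // By default JacobiSVD computes only the singular values, which is all we need.
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A);
    Eigen::VectorXd s = svd.singularValues();   // sorted in decreasing order
    double smax = s(0);
    double smin = s(s.size() - 1);
    // cond(A) = smax / smin > 1/eps  <=>  smin < smax * eps
    return smin <= smax * std::numeric_limits<double>::epsilon();
}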

- Using LAPACK we can use DGECON/SGECON for double/single precision general matrices, or one of the similar routines for matrices with usable structure (banded, symmetric, etc.). This uses LU factorization. – Michael Conlen Sep 17 '12 at 15:16
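A hedged sketch of the DGECON route mentioned in the comment above, assuming the LAPACKE C interface (e.g. from reference LAPACK or OpenBLAS) is available; the tolerance argument is my own illustrative choice, not part of any answer:

// Sketch: LU-factorize with dgetrf, then estimate the reciprocal condition
// number with dgecon. RCOND near zero means numerically singular.
#include <lapacke.h>

bool is_numerically_singular(double* a, int n, double tol) {
    // 1-norm of the original matrix, required by dgecon.
    double anorm = LAPACKE_dlange(LAPACK_ROW_MAJOR, '1', n, n, a, n);

    // In-place LU factorization; info > 0 already signals an exact zero pivot.
    lapack_int ipiv[16];   // enough for the < 16x16 case in the question
    lapack_int info = LAPACKE_dgetrf(LAPACK_ROW_MAJOR, n, n, a, n, ipiv);
    if (info > 0) return true;

    // Estimate the reciprocal condition number from the LU factors.
    double rcond;
    LAPACKE_dgecon(LAPACK_ROW_MAJOR, '1', n, a, n, anorm, &rcond);
    return rcond < tol;    // e.g. tol = machine epsilon
}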
I agree with Gaussian elimination. http://math.nist.gov/javanumerics/jama/doc/Jama/LUDecomposition.html documents JAMA's LU decomposition: after constructing the LU decomposition of a matrix you can call a method on it to get the determinant. My guess is that it is at least worth timing this against any more specialised scheme.
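Below is a minimal, self-contained C++ sketch of this idea (my own illustration, not the JAMA code): Gaussian elimination with partial pivoting that reports the matrix as numerically singular when no usable pivot remains. The absolute tolerance `tol` is an illustrative placeholder; a threshold relative to the matrix norm is more robust:

#include <cmath>
#include <utility>
#include <vector>

// `a` is an n x n matrix stored row-major; taken by value because the
// elimination destroys it.
bool is_singular(std::vector<double> a, int n, double tol = 1e-12) {
    for (int k = 0; k < n; ++k) {
        // Partial pivoting: pick the largest remaining entry in column k.
        int piv = k;
        for (int i = k + 1; i < n; ++i)
            if (std::fabs(a[i * n + k]) > std::fabs(a[piv * n + k]))
                piv = i;
        if (std::fabs(a[piv * n + k]) < tol)
            return true;                      // no usable pivot: singular
        if (piv != k)
            for (int j = 0; j < n; ++j)
                std::swap(a[k * n + j], a[piv * n + j]);
        // Eliminate the entries below the pivot.
        for (int i = k + 1; i < n; ++i) {
            double f = a[i * n + k] / a[k * n + k];
            for (int j = k; j < n; ++j)
                a[i * n + j] -= f * a[k * n + j];
        }
    }
    return false;
}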

- LU decomposition is *the* standard way to compute determinants. Determinants are just not the standard way to test whether a matrix is singular. – Alexandre C. May 31 '12 at 08:13
In R the difference is close, but the QR decomposition wins out. Try the following:
Sig <- matrix(rnorm(16), 4, 4)

# Time the SVD-based check: singular if any singular value is (near) zero.
a <- Sys.time()
for (j in 1:1000) {
  out  <- svd(Sig)
  cond <- any(out$d < 1e-10)
}
Sys.time() - a

# Time the QR-based check: singular if the numerical rank is deficient.
a <- Sys.time()
for (j in 1:1000) {
  out  <- qr(Sig)
  cond <- out$rank < NROW(Sig)
}
Sys.time() - a

The fastest way is probably to hard-code a determinant function for each matrix size you expect to deal with.
Here is an example in C for N=3, but if you check out the Leibniz formula for determinants the pattern should be clear for all N.
#include <stdbool.h>

/* Leibniz expansion of the 3x3 determinant (0-based indices). */
bool is_singular3(const double m[3][3]) {
    double det = m[0][0]*m[1][1]*m[2][2]
               - m[0][0]*m[1][2]*m[2][1]
               - m[0][1]*m[1][0]*m[2][2]
               + m[0][1]*m[1][2]*m[2][0]
               + m[0][2]*m[1][0]*m[2][1]
               - m[0][2]*m[1][1]*m[2][0];
    return det == 0.0;
}

- An epsilon distance check from zero is in order for floating point matrices. – phs May 31 '12 at 04:25
- -1 For n=15 Leibniz's formula will have 1.3 trillion summands. This is not a realistic answer. – BlueRaja - Danny Pflughoeft May 31 '12 at 16:00
- -1, the issue is not how to generate the code effectively. The issue is the general time complexity of the problem. – RandomSort May 27 '13 at 11:33