I need to find out if a matrix is positive definite. My matrix is a numpy matrix. I was expecting to find a related method in the numpy library, but had no success. I appreciate any help.
11 Answers
You can also check whether all the eigenvalues of the matrix are positive; if so, the matrix is positive definite:
import numpy as np

def is_pos_def(x):
    return np.all(np.linalg.eigvals(x) > 0)
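A quick usage sketch (the example matrix is my own, not from the question):
M = np.array([[2.0, -1.0],
              [-1.0, 2.0]])  # symmetric, eigenvalues 1 and 3
print(is_pos_def(M))  # True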

You could use np.linalg.eigvals instead, which only computes the eigenvalues. Even then, it's much slower than @NPE's approach (3x for 10x10 matrices, 40x for 1000x1000). – jorgeca Apr 29 '13 at 10:09
It is not true in general that all positive eigenvalues implies positive definiteness, unless you know that the matrix is symmetric (real case) or Hermitian (complex case). For example, A = array([[1, -100],[0, 2]]) is not positive definite. Some might include symmetric or Hermitian as part of the *definition* of "positive definite", but that is not universal. – Warren Weckesser Apr 29 '13 at 20:05
@WarrenWeckesser Oops, that's right, not pedantic! In fact, checking symmetry is also needed if using `np.linalg.cholesky` (it doesn't check it and may return a wrong result, as your example also shows). I wonder how checking whether a non symmetric matrix is positive definite can be done numerically... – jorgeca May 02 '13 at 23:34
You can do `np.all(x-x.T==0)` to check for symmetry. – shinjin Jan 31 '14 at 00:04
This is terribly inefficient! For matrices larger than about 6 or 7 rows/columns, use cholesky as pointed out by NPE below. The cholesky route feels less convenient (catching an exception etc.) but it is much less wasteful. – Alex Flint Mar 31 '14 at 17:29
If you're working with hermitian or symmetric matrices, use `np.linalg.eigvalsh` instead; it's a lot faster – Joren Aug 31 '23 at 22:29
You could try computing the Cholesky decomposition (`numpy.linalg.cholesky`). This will raise `LinAlgError` if the matrix is not positive definite.
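A minimal sketch of that approach (the wrapper function and its name are my own):
import numpy as np

def is_pos_def(x):
    try:
        np.linalg.cholesky(x)  # raises LinAlgError if x is not positive definite
        return True
    except np.linalg.LinAlgError:
        return False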

This should be substantially more efficient than the eigenvalue solution. – MRocklin Jul 22 '13 at 16:18
Just a note that in the positive semi-definite case, numerically speaking, one can also add a little identity to the matrix (thus shifting all eigenvalues a small amount, e.g. a few times machine precision), then use the cholesky method as usual. – jawknee Jan 09 '18 at 17:19
A quick warning: from the docs for `numpy.linalg.cholesky`: `No checking is performed to verify whether a is Hermitian or not. In addition, only the lower-triangular and diagonal elements of a are used.` In other words, if you test a non-symmetric or non-Hermitian matrix, `cholesky` may not raise a `LinAlgError` even if that matrix is not positive definite. See @DanielGarza's answer for a potential fix. – Praveen Dec 01 '22 at 01:00
There seems to be a small confusion in all of the answers above (at least concerning the question).
For real matrices, the positive-eigenvalue test and the `np.linalg.cholesky` test are only valid if the matrix is symmetric. So one first needs to test whether the matrix is symmetric and then apply one of those methods (positive eigenvalues or Cholesky decomposition).
For example:
import numpy as np

# A non-symmetric matrix
A = np.array([[9, 7], [6, 14]])

# check that all eigenvalues are positive:
np.all(np.linalg.eigvals(A) > 0)

# take a 'Cholesky' decomposition:
chol_A = np.linalg.cholesky(A)
The matrix A is not symmetric, but the eigenvalues are positive, and NumPy returns a Cholesky decomposition that is wrong. You can check that:
chol_A.dot(chol_A.T)
differs from A.
You can also check that all the Python functions above would test positive for 'positive-definiteness'. This could potentially be a serious problem if you were trying to use the Cholesky decomposition to compute the inverse, since:
>np.linalg.inv(A)
array([[ 0.16666667, -0.08333333],
       [-0.07142857,  0.10714286]])
>np.linalg.inv(chol_A.T).dot(np.linalg.inv(chol_A))
array([[ 0.15555556, -0.06666667],
       [-0.06666667,  0.1       ]])
are different.
In summary, I would suggest adding a line to any of the functions above to check if the matrix is symmetric, for example:
def is_pos_def(A):
    if np.array_equal(A, A.T):
        try:
            np.linalg.cholesky(A)
            return True
        except np.linalg.LinAlgError:
            return False
    else:
        return False
You may want to replace np.array_equal(A, A.T) in the function above with np.allclose(A, A.T) to avoid differences that are due to floating point errors.
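For instance, on the non-symmetric matrix A from the example above, this version correctly returns False:
A = np.array([[9, 7], [6, 14]])
print(is_pos_def(A))  # False: A is not symmetric, even though its eigenvalues are positive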

Nitpick: you should probably check for `numpy.linalg.LinAlgError` unless you import it with `from numpy.linalg import LinAlgError`, which isn't a thing I would want to do if I would only catch this specific exception once or twice in my code. – Imperishable Night Jul 17 '18 at 15:32
If working with complex matrices, this might lead to error (namely if A is complex positive definite, hence Hermitian with strictly positive eigenvalues, the cholesky trick is still correct but it will not pass the first `if` statement for `np.allclose(A, A.T)`). Using `np.allclose(A, A.H)` will fix this (the OP says he is working with numpy matrix; if working with ndarray, use `A.conj().T` instead of `.H`). – H. Rev. Feb 08 '19 at 13:22
To illustrate @NPE's answer with some ready-to-use code:
import numpy as np

def is_pd(K):
    try:
        np.linalg.cholesky(K)
        return 1
    except np.linalg.LinAlgError as err:
        if 'Matrix is not positive definite' in str(err):
            return 0
        else:
            raise
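Usage, with a small symmetric positive definite matrix of my own:
K = np.array([[4.0, 1.0],
              [1.0, 3.0]])  # symmetric, eigenvalues both positive
print(is_pd(K))  # 1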

I don't know why NPE's solution is so underrated. It's the best way to do this. I've found on Wikipedia that the complexity is cubic.
Furthermore, it is said there that it's more numerically stable than the LU decomposition, and the LU decomposition is more stable than the method of finding all the eigenvalues.
And it is a very elegant solution, because it rests on a fact:
A matrix has a Cholesky decomposition if and only if it is symmetric positive definite.
So why not use the maths? Maybe some people are afraid of raising the exception, but it's a fact too that it's quite useful to program with exceptions.

From the same Wikipedia page, it seems like your statement is wrong. The page says "If the matrix A is Hermitian and positive semi-definite, then it still has a decomposition of the form A = LL* if the diagonal entries of L are allowed to be zero.[3]" Thus a matrix with a Cholesky decomposition does not imply the matrix is symmetric positive definite since it could just be semi-definite. Am I interpreting this wrong? Also, it seems like you've just thrown "symmetric" across the implication. i.e. shouldn't it be every Hermitian positive-definite matrix has unique Cholesky decomposition? – user3731622 May 20 '16 at 18:23
@user3731622 It is correct that Hermitian positive semi-definite matrices also have a Cholesky decomposition. But it is not a unique decomposition such as in the Hermitian positive definite case. So numpy decides to give you an error instead of randomly choosing one of possibly infinitely many decompositions to return. – CognizantApe Feb 25 '23 at 15:05
If you specifically want symmetric (Hermitian, if complex) positive SEMI-definite matrices, then the below will do. If you don't care about symmetry (Hermitian, if complex), remove the 'if' statement that checks for it. If you want positive definite rather than positive SEMI-definite, then remove the regularization line (and change the value passed to `np.linalg.cholesky()` from `regularized_X` to `X`).
import numpy as np

def is_hermitian_positive_semidefinite(X):
    if X.shape[0] != X.shape[1]:  # must be a square matrix
        return False
    if not np.all(X - X.H == 0):  # must be a symmetric or hermitian matrix
        return False
    try:
        # Cholesky decomposition fails for matrices that are NOT positive definite.
        # But since the matrix may be positive SEMI-definite due to rank deficiency
        # we must regularize.
        regularized_X = X + np.eye(X.shape[0]) * 1e-14
        np.linalg.cholesky(regularized_X)
    except np.linalg.LinAlgError:
        return False
    return True
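A quick usage sketch (the test matrix is my own). Note that `X.H` requires `np.matrix`; see the comments below for the ndarray equivalent:
X = np.matrix([[2, -1j],
               [1j, 2]])  # Hermitian, eigenvalues 1 and 3
print(is_hermitian_positive_semidefinite(X))  # True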

@DeepRazi Numpy's Cholesky decomposition implementation works on complex numbers (i.e. complex np.dtype). So yes, it works in that sense. But my code above originally checked if the transpose rather than the conjugate-transpose is equal to itself, which makes the overall function invalid for complex numbers. I have now changed the transpose to conjugate-transpose and it is now valid for complex numbers. – CognizantApe Oct 14 '20 at 19:41
`np.all( X - X.H == 0 )` is too strict; I think `np.allclose(X, X.H)` is better. – syockit Jan 12 '23 at 06:24
@syockit Well if you only care if a matrix is approximately Hermitian as opposed to actually Hermitian then yes. But to be clear, because we are subtracting elements of X from itself, there is no floating point or other numerical error in the calculation `X - X.H`. So if you are constructing X in a reliably Hermitian way then this algorithm is the way to go. If you are approximating a Hermitian matrix then I agree that using `allclose` is better. – CognizantApe Feb 24 '23 at 19:52
For a real matrix $A$, we have $x^T A x = \frac{1}{2}\, x^T (A+A^T) x$, and $A+A^T$ is a symmetric real matrix. So $A$ is positive definite iff $A+A^T$ is positive definite, which holds iff all the eigenvalues of $A+A^T$ are positive.
import numpy as np

def is_pos_def(A):
    M = np.matrix(A)
    # M + M.T is symmetric, so the eigenvalue test is valid for it
    return np.all(np.linalg.eigvals(M + M.transpose()) > 0)
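For example, with a small non-symmetric matrix of my own that is positive definite in the quadratic-form sense:
A = np.array([[2, -1],
              [1, 2]])  # x^T A x = 2*x1**2 + 2*x2**2 > 0 for all non-zero x
print(is_pos_def(A))  # True: A + A.T = [[4, 0], [0, 4]] has positive eigenvalues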

This is the only answer properly answering the question by OP: "how to determine if a matrix is PD". All the other answers confusingly make the assumption that symmetry is needed for a matrix to be positive definite, which is not the case. – Guillaume Apr 26 '20 at 11:09
For a non-symmetric matrix you can use the principal minor test:
import numpy as np

def isPD(Y):
    row = Y.shape[0]
    for i in range(1, row + 1):
        # leading principal minor of order i
        det = np.linalg.det(Y[:i, :i])
        if det > 0:
            continue
        else:
            return "Not Positive Definite, Test Principal minor failed"
    return "Positive Definite"

Positive Definite:
numpy.linalg.cholesky(x)  # just handle the LinAlgError
Positive Semi-Definite:
np.all(np.linalg.eigvals(x) >= 0)
Notice: in most cases, np.all(np.linalg.eigvals(x) > 0) will also tell you that your matrix is PSD even though the test uses > and not >=. I got into this problem some days ago; I figure it is down to round-off error: the (numerically) zero eigenvalues of a PSD matrix come out as really small numbers, and even the Cholesky decomposition might generate an error on such a matrix.
Note: in order to test, you might want to create a positive semi-definite matrix and a positive definite matrix:
n_size = 4
a = np.random.rand(n_size)
A_PSD = np.outer(a, a)  # the outer product of any vector generates a PSD matrix
A_PD = A_PSD + 1e-5 * np.identity(n_size)  # little trick I found for a PD matrix
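With these two matrices the tests above behave as follows (a sketch; as noted, round-off can make the computed zero eigenvalues of A_PSD slightly positive or slightly negative):
np.all(np.linalg.eigvals(A_PD) > 0)  # True: strictly positive eigenvalues
np.linalg.cholesky(A_PD)             # succeeds
# For A_PSD the (numerically) zero eigenvalues may come out as tiny positive
# or tiny negative numbers, so this >= 0 test is not fully reliable:
np.all(np.linalg.eigvals(A_PSD) >= 0)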

To use `eigvals` to check for positive definiteness, you may need to compare to a certain epsilon. For example, `np.linalg.matrix_rank` uses machine epsilon multiplied by largest eigenvalue (and `max(x.shape)`), while `np.linalg.lstsq` uses machine epsilon in a similar manner. In fact, the above PSD test might fail if there is an eigenvalue below machine epsilon but with negative sign. – syockit Jan 12 '23 at 06:22
Your comment is on point (I also noticed a mistake in the code). When you are dealing with those situations, you are going to encounter the case you are talking about a lot; in my case it's due to the tolerance rate of some solvers. Most of the time, my workaround is summing an identity matrix multiplied by a small value, as in the last line of code in the answer, but this of course depends on your tolerance; in my case it's fine. – silgon Jan 12 '23 at 15:32
May I suggest this solution, valid for non-symmetric matrices. Basically it tries to find a non-zero vector z that minimizes z^T · M · z. As you see, it can either converge to a minimum (`minV['success']`) or reach the maximum number of iterations (`minV['status'] == 2`). If at this point the result is still positive, we could consider that the matrix is positive definite.
I guess there must be analytical methods to determine it, but I see a bit of confusion here about symmetric matrices (symmetry is not a prerequisite for being positive definite!).
from scipy.optimize import minimize
import numpy as np

def is_pos_def(m, z0='rand', maxiter=100):
    # tells if matrix is positive definite
    if m.shape[0] != m.shape[1]:
        raise Exception("Matrix is not square")
    elif (m == m.T).all():  # symmetry testing
        return np.all(np.linalg.eigvals(m) > 0)
    else:
        def f(z):
            z = np.array(list(z))[:, np.newaxis]
            return np.dot(np.dot(z.T, m), z)
        if z0 == 'rand':
            z0 = list(np.random.rand(m.shape[0]))
        # constraint for a non-zero vector solution
        cons = ({'type': 'ineq', 'fun': lambda z: np.sum(np.abs(z))})
        minV = minimize(f, z0, method='COBYLA', options={'maxiter': maxiter}, constraints=cons)
        if minV['success'] or minV['status'] == 2:
            return minV['fun'] > 0
        else:
            return minV
This method works for both symmetric and non-symmetric matrices; you can test it with the following matrix (checked with Wolfram Alpha too):
m = np.array([[3, 0, 0],
              [0, 2, 0],
              [4, 3, 3]])
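Calling the function on that matrix should report that it is positive definite (a sketch; COBYLA starts from a random point by default, so the optimization path can vary):
print(is_pos_def(m))  # expected: True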

Numerical inaccuracy is a problem when determining whether a matrix is positive/negative definite or semi-definite.
To illustrate this, consider the following piece of code, where I show that a positive-semi-definite matrix can look positive-definite or even indefinite due to numerical inaccuracy:
import numpy as np
np.random.seed(1234)
n = 5
A = np.random.uniform(-1, 1, (n, n))
B = A @ A.T # guaranteed to be positive-definite
w, v = np.linalg.eigh(B)
print(w)
# [0.00571615 0.29977027 0.44425488 2.67023429 3.82585751]
C = v @ np.diag(w) @ v.T
w = np.linalg.eigvalsh(C)
print(w)
# [0.00571615 0.29977027 0.44425488 2.67023429 3.82585751]
# Same as original!
w[0] = 0 # set smallest eigen-value to zero
E = v @ np.diag(w) @ v.T
w = np.linalg.eigvalsh(E)
print(w)
# [4.06224137e-16 2.99770273e-01 4.44254883e-01 2.67023429e+00 3.82585751e+00]
# ^^^^^^^^^^^^^^
# Ooops! matrix is now positive-definite?!
w[:2] = 0 # set smallest two eigen-values to zero
E = v @ np.diag(w) @ v.T
w = np.linalg.eigvalsh(E)
print(w)
# [-5.14560681e-16 4.27158960e-17 4.44254883e-01 2.67023429e+00 3.82585751e+00]
# ^^^^^^^^^^^^^^^
# Ooops! matrix is not definite?!
The code above consists of the following steps:
- Construct a random matrix B which is guaranteed to be positive-definite.
- Extract eigen-values/vectors, reconstruct matrix from these and extract eigen-values again to show that they coincide with the original eigen-values.
- Make the smallest eigen-value zero, construct the resulting positive-semi-definite matrix, extract its eigen-values and show that it looks positive-definite.
- Make the two smallest eigen-values zero, construct the resulting positive-semi-definite matrix, extract its eigen-values and show that it looks indefinite.
This example shows that even if the matrix is known to be positive-semi-definite we might not get any eigen-values which are exactly zero and we might even get very small negative eigen-values making the matrix look indefinite.
To overcome this issue we need to include a tolerance with which we will determine whether eigen-values are positive/negative/zero.
Here is an example which shows that by using a tolerance we can correctly identify eigen-values that should have been zero and determine whether the matrix is positive/negative definite or semi-definite:
tol = np.finfo(float).eps
w_abs = np.abs(w)
is_zero = w_abs < w_abs.max() * tol
print(is_zero)
# [ True  True False False False]
#   ^^^^  ^^^^
# first two (smallest) eigen-values correctly identified as being zero
is_positive = w[~is_zero] > 0
s = 'the matrix is '
if is_positive.all():
    s += 'positive-'
elif not is_positive.any():
    s += 'negative-'
if is_zero.any():
    s += 'semi-'
s += 'definite'
print(s)
# the matrix is positive-semi-definite
Including a tolerance as in the example above primarily mitigates the issue of confusing a (semi-)definite matrix for an indefinite matrix, but there is still the issue that a definite matrix may well be misidentified as being semi-definite if some of the (strictly positive/negative) eigen-values are very small. I don't know of any robust solution for this and I suspect that it might just be an intrinsic limitation of finite precision arithmetic.
I have written the following simple function which determines whether a matrix is positive/negative definite or semi-definite with a tolerance:
def is_definite(x, sign=None, semi=None, tol=None):
    """
    Determine whether a matrix is positive/negative definite or semi-definite.

    Examples:
    >>> import numpy as np
    >>> np.random.seed(1234)
    >>> x = np.random.uniform(-1, 1, (5, 5)) + 1j * np.random.uniform(-1, 1, (5, 5))
    >>> is_definite(x)
    False
    >>> x = x @ x.conj().T  # x @ x.conj().T is guaranteed to be positive-definite
    >>> is_definite(x)
    True
    >>> is_definite(x, -1)
    False
    >>> is_definite(x, 1)
    True
    >>> w, v = np.linalg.eigh(x)
    >>> w[0] = 0
    >>> w
    array([0.        , 0.2272762 , 1.46465277, 4.61979679, 8.14691898])
    >>> x = v @ np.diag(w) @ v.conj().T
    >>> np.linalg.eigvalsh(x)
    array([-3.18173864e-16,  2.27276198e-01,  1.46465277e+00,  4.61979679e+00,  8.14691898e+00])
    >>> is_definite(x, 1)
    False
    >>> is_definite(x, -1)
    False
    >>> is_definite(x, 1, semi=True)
    True

    :param x: M*M-matrix or array of M*M-matrices
    :param sign:
        positive or negative number to check for positive or negative (semi-)definiteness respectively.
        Default is to return True for any definite matrix (positive or negative).
    :param semi: whether to check for semi-definiteness. Default is False
    :param tol: tolerance, default is machine precision
    :return: bool or array of bools
    """
    x = np.asarray(x)
    if x.ndim < 2:
        raise ValueError('x must be at least two-dimensional')
    x = (x + np.moveaxis(x, -1, -2).conj()) / 2
    if tol is None:
        tol = np.finfo(x.dtype).eps
    w = np.linalg.eigvalsh(x)
    w_abs = np.abs(w)
    is_zero = w_abs < w_abs.max(-1, keepdims=True) * tol
    if sign is None:
        pos = w > 0
        neg = w < 0
        if semi:
            pos |= is_zero
            neg |= is_zero
        else:
            pos &= ~is_zero
            neg &= ~is_zero
        return pos.all(-1) | neg.all(-1)
    ret = np.sign(w) == np.sign(sign)
    if semi:
        ret |= is_zero
    else:
        ret &= ~is_zero
    return ret.all(-1)

def is_positive_definite(x, tol=None):
    return is_definite(x, 1, False, tol)

def is_positive_semidefinite(x, tol=None):
    return is_definite(x, 1, True, tol)

def is_negative_definite(x, tol=None):
    return is_definite(x, -1, False, tol)

def is_negative_semidefinite(x, tol=None):
    return is_definite(x, -1, True, tol)

def is_indefinite(x, tol=None):
    return not is_definite(x, None, True, tol)
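As a quick sketch of how these wrappers behave, reusing B (positive definite) and E (positive semi-definite with two zero eigenvalues) from the first code block of this answer:
print(is_positive_definite(B))      # True
print(is_positive_definite(E))      # False
print(is_positive_semidefinite(E))  # True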
