
Given an N x N symmetric matrix C and an N x N diagonal matrix I, find the solutions λ of the equation det(λI-C)=0. In other words, find the (generalized) eigenvalues of C with respect to I.

I know a few ways to solve this in MATLAB using built-in functions:

1st way:

function lambdas = eigenValues(C,I)
    % roots of the characteristic polynomial det(C - x*I), sorted ascending
    syms x;
    lambdas = sort(roots(double(fliplr(coeffs(det(C - I*x))))));
end

2nd way:

[V,D]=eig(C,I);

However, I need to use Python. There are similar functions in NumPy and SymPy but, according to the docs (numpy, sympy), they take only one matrix, C, as input, and the result is different from the one produced by MATLAB. The symbolic solutions produced by SymPy aren't helpful either. Maybe I am doing something wrong? How can I find the solution?


Example

  • MATLAB:

%INPUT

I =

     2     0     0
     0     6     0
     0     0     5
C =

     4     7     0
     7     8    -4
     0    -4     1

[v,d]=eig(C,I)

%RESULT

v =

    -0.3558   -0.3109   -0.5261
     0.2778    0.1344   -0.2673
     0.2383   -0.3737    0.0598


d =

   -0.7327         0         0
         0    0.4876         0
         0         0    3.7784
  • Python 3.5:

# INPUT

import numpy as np

I = np.matrix([[2, 0, 0],
               [0, 6, 0],
               [0, 0, 5]])

C = np.matrix([[4, 7, 0],
               [7, 8, -4],
               [0, -4, 1]])

np.linalg.eigh(C)

# RESULT

(array([-3., 1.91723747, 14.08276253]),
 matrix([[-0.57735027,  0.60061066, -0.55311256],
         [ 0.57735027, -0.1787042 , -0.79670037],
         [ 0.57735027,  0.77931486,  0.24358781]]))
andreyxdd

3 Answers


At least if I has positive diagonal entries you can simply solve a transformed system: writing I = J*J, with J the diagonal matrix of the square roots of I's diagonal entries, the generalized problem A x = λ I x is equivalent to the standard symmetric problem (J^-1 A J^-1) y = λ y with y = J x:

# example problem
>>> import numpy as np
>>> A = np.random.random((3, 3))
>>> A = A.T @ A
>>> I = np.identity(3) * np.random.random((3,))

# transform 
>>> J = np.sqrt(np.einsum('ii->i', I))
>>> B = A / np.outer(J, J)

# solve
>>> eval_, evec = np.linalg.eigh(B)

# back transform result
>>> evec /= J[:, None]

# check
>>> A @ evec
array([[ -1.43653725e-02,   4.14643550e-01,  -2.42340866e+00],
       [ -1.75615960e-03,  -4.17347693e-01,  -8.19546081e-01],
       [  1.90178603e-02,   1.34837899e-01,  -1.69999003e+00]])
>>> eval_ * (I @ evec)
array([[ -1.43653725e-02,   4.14643550e-01,  -2.42340866e+00],
       [ -1.75615960e-03,  -4.17347693e-01,  -8.19546081e-01],
       [  1.90178603e-02,   1.34837899e-01,  -1.69999003e+00]])

OP's example. IMPORTANT: you must use np.array for I and C; np.matrix will not work.

>>> I=np.array([[2,0,0],[0,6,0],[0,0,5]])
>>> C=np.array([[4,7,0],[7,8,-4],[0,-4,1]])
>>> 
>>> J = np.sqrt(np.einsum('ii->i', I))
>>> B = C / np.outer(J, J)
>>> eval_, evec = np.linalg.eigh(B)
>>> evec /= J[:, None]
>>> 
>>> evec
array([[-0.35578356, -0.31094779, -0.52605088],
       [ 0.27778714,  0.1343625 , -0.267297  ],
       [ 0.23826117, -0.37371199,  0.05975754]])
>>> eval_
array([-0.73271478,  0.48762792,  3.7784202 ])

If I has both positive and negative diagonal entries, use eig instead of eigh, and cast to a complex dtype before taking the square root.
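For instance, here is a minimal (untested) sketch of that indefinite case, reusing the names from the session above with an arbitrarily chosen I that has one negative diagonal entry:

>>> # same A as in OP's example, but I is now indefinite
>>> A = np.array([[4., 7., 0.], [7., 8., -4.], [0., -4., 1.]])
>>> I = np.diag([2., -6., 5.])
>>> 
>>> J = np.sqrt(np.einsum('ii->i', I).astype(complex))  # complex sqrt of the diagonal
>>> B = A / np.outer(J, J)
>>> eval_, evec = np.linalg.eig(B)                       # eig, not eigh
>>> evec = evec / J[:, None]                             # back transform
>>> 
>>> np.allclose(A @ evec, eval_ * (I @ evec))            # check A x = lambda I x
True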

Paul Panzer

Differing from the other answers, I assume that by the symbol I you mean the identity matrix, Ix = x.

What you want to solve, Cx = λIx with I the identity (i.e., Cx = λx), is the so-called standard eigenvalue problem, and most eigenvalue solvers tackle the problem in that form, hence the NumPy function has the signature eig(C).

If your C matrix is symmetric and your problem is indeed a standard eigenvalue problem, I'd recommend numpy.linalg.eigh, which is optimized for this type of problem.
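For example, with the OP's C (the names w and v here are just illustrative):

import numpy as np

C = np.array([[4, 7, 0], [7, 8, -4], [0, -4, 1]])  # OP's symmetric matrix
w, v = np.linalg.eigh(C)  # w: eigenvalues in ascending order, v: orthonormal eigenvectors (columns)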

On the contrary, if your problem really is a generalized eigenvalue problem, as, e.g., the frequency equation Kx = ω²Mx, you could use scipy.linalg.eigh, which supports that type of problem statement for symmetric matrices.

eigvals, eigvecs = scipy.linalg.eigh(C, I)

With respect to the discrepancies in the eigenvalues, the NumPy implementation gives no guarantees about their ordering, so it could be just a different ordering; but if your problem is indeed a generalized problem (I not being the identity matrix...), the solution is of course different and you have to use the SciPy implementation of eigh.

If the discrepancy is in the eigenvectors, remember that eigenvectors are only determined up to an arbitrary scale factor and, again, their ordering could be undefined (though, of course, they come in the same order as the eigenvalues). The situation is a little different for scipy.linalg.eigh: in that case the eigenvalues are sorted and the eigenvectors are normalized with respect to the second matrix argument (I in your example).


P.s.: the behaviour of scipy.linalg.eigh (i.e., sorted eigenvalues and normalized eigenvectors) is so convenient for my use cases that I use it also to solve standard eigenvalue problems.
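A small sketch illustrating that convention with the OP's matrices; if I recall the SciPy behaviour correctly, the eigenvalues come out sorted and the generalized eigenvectors satisfy v.T @ I @ v = identity:

import numpy as np
import scipy.linalg

I = np.array([[2, 0, 0], [0, 6, 0], [0, 0, 5]])
C = np.array([[4, 7, 0], [7, 8, -4], [0, -4, 1]])

eigvals, eigvecs = scipy.linalg.eigh(C, I)

print(eigvals)                                            # sorted ascending
print(np.allclose(eigvecs.T @ I @ eigvecs, np.eye(3)))    # I-orthonormal columns
print(np.allclose(C @ eigvecs, I @ eigvecs * eigvals))    # C x = lambda I x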

gboffi

Using SymPy:

>>> from sympy import *
>>> t = Symbol('t')
>>> D = diag(2,6,5)
>>> S = Matrix([[ 4, 7, 0],
                [ 7, 8,-4],
                [ 0,-4, 1]])
>>> (t*D - S).det()
60*t**3 - 212*t**2 - 77*t + 81

Computing the exact roots:

>>> roots = solve(60*t**3 - 212*t**2 - 77*t + 81,t)
>>> roots
[53/45 + (-1/2 - sqrt(3)*I/2)*(312469/182250 + sqrt(797521629)*I/16200)**(1/3) + 14701/(8100*(-1/2 - sqrt(3)*I/2)*(312469/182250 + sqrt(797521629)*I/16200)**(1/3)), 53/45 + 14701/(8100*(-1/2 + sqrt(3)*I/2)*(312469/182250 + sqrt(797521629)*I/16200)**(1/3)) + (-1/2 + sqrt(3)*I/2)*(312469/182250 + sqrt(797521629)*I/16200)**(1/3), 53/45 + 14701/(8100*(312469/182250 + sqrt(797521629)*I/16200)**(1/3)) + (312469/182250 + sqrt(797521629)*I/16200)**(1/3)]

Computing floating-point approximations of the roots:

>>> for r in roots:
...     r.evalf()
... 
0.487627918145732 + 0.e-22*I
-0.73271478047926 - 0.e-22*I
3.77842019566686 - 0.e-21*I

Note that the roots are real.
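If the spurious 0.e-22*I terms are a nuisance, one variation (a small sketch, same matrices as above) is to ask SymPy for the real roots only via real_roots, whose CRootOf results evaluate to purely real floats:

>>> from sympy import Symbol, diag, Matrix, real_roots
>>> t = Symbol('t')
>>> D = diag(2, 6, 5)
>>> S = Matrix([[ 4, 7, 0],
                [ 7, 8,-4],
                [ 0,-4, 1]])
>>> p = (t*D - S).det()
>>> [r.evalf(6) for r in real_roots(p)]
[-0.732715, 0.487628, 3.77842]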

Rodrigo de Azevedo