
MATLAB defines a left eigenvector w (a column vector) of the matrix A by the equation

w* A = d w*

where w* is the conjugate transpose of w. This implies that when you diagonalize the matrix A by the transformation D = S^{-1}AS, where D is a diagonal matrix and the columns of S are (right) eigenvectors of A, the rows of S^{-1} are the conjugate transposes of the left eigenvectors w. However, if I test this on a simple matrix,

A=[1+i,2-i,3;1,i,0.5i;5i,7,-2]

and obtain the left and right eigenvectors via

[S,D,W]=eig(A)

I don't see a relation between W* and S^{-1}. Is it a matter of precision? Multiplying W* by S gives a diagonal matrix with complex entries.
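
As a minimal reproduction of the test above (this sketch just reruns the commands from the question, assuming a MATLAB version whose eig returns left eigenvectors as a third output):

A=[1+i,2-i,3;1,i,0.5i;5i,7,-2];
[S,D,W]=eig(A);              % S: right eigenvectors, W: left eigenvectors
P = W'*S                     % diagonal, with complex entries that are not 1
norm(P - diag(diag(P)))      % off-diagonal part is at roundoff level
norm(W' - inv(S))            % not small: W' is not S^{-1} as returned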

Tarek

1 Answer


It's not a problem of precision but one of scaling, and of the fact that eigenvectors are not unique. The only time the matrix of left eigenvectors (as rows) is guaranteed to be exactly the inverse of the matrix of right eigenvectors is for a Hermitian A, although their product is always diagonal. Additionally, the rows of the inverse of the matrix of right eigenvectors are always left eigenvectors of A, but they are not the only left eigenvectors; the difference is scaling.
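
As an aside, here is a small illustrative sketch of the Hermitian case; the matrix H and the variables Vh, Dh, Wh below are new names introduced just for this check, not part of the question:

>> H = [2, 1-1i; 1+1i, 3];        % H equals its conjugate transpose
>> [Vh, Dh, Wh] = eig(H);
>> norm(Wh'*Vh - eye(2))          % roundoff level: the left rows invert Vh
>> norm(inv(Vh) - Vh')            % roundoff level: Vh is unitary, so inv(Vh) = Vh'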

For your example (I'm going to use R and L = W' since I find it more natural):

>> A=[1+i,2-i,3;1,i,0.5i;5i,7,-2];
>> [R,D,W]=eig(A);
>> L = W';
>> Rinv = D/(A*R);   % A*R = R*D, so D/(A*R) = D*(R*D)^(-1) = inv(R)

Are L and Rinv matrices of left eigenvectors?

>> [norm(L*A - D*L) , norm(Rinv*A - D*Rinv)]
ans =
   1.0e-14 *
    0.4254    0.9041

Yes, to relative machine precision.


Is the product of L and R diagonal?

>> LR = L*R
LR =
   0.8319 + 0.0826i   0.0000 + 0.0000i   0.0000 - 0.0000i
   0.0000 - 0.0000i   0.3976 + 0.4274i  -0.0000 - 0.0000i
  -0.0000 - 0.0000i   0.0000 + 0.0000i  -0.3079 - 0.4901i

Yep.
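
To put a number on it (just re-checking the matrix printed above):

>> norm(LR - diag(diag(LR)))   % off-diagonal residue is at roundoff level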


Now what happens if we scale each left eigenvector (row) of L such that the above product is the identity?

>> Lp = bsxfun(@rdivide,L,diag(LR))
Lp =
  -0.4061 - 0.5332i  -0.3336 + 0.6109i   0.7017 - 0.0696i
   0.7784 + 0.0140i   0.9824 - 1.0560i   0.4772 - 0.1422i
   0.2099 - 0.0812i  -0.9004 + 1.4331i  -0.2219 - 0.1422i

>> Rinv
Rinv =
  -0.4061 - 0.5332i  -0.3336 + 0.6109i   0.7017 - 0.0696i
   0.7784 + 0.0140i   0.9824 - 1.0560i   0.4772 - 0.1422i
   0.2099 - 0.0812i  -0.9004 + 1.4331i  -0.2219 - 0.1422i

We recover Rinv with the re-scaling. And since Rinv is a set of left eigenvectors, so is Lp.
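
As a quick sanity check of that statement, reusing the variables above:

>> norm(Lp*R - eye(3))     % the rescaled left rows are biorthonormal to R
>> norm(Lp - Rinv)         % Lp and Rinv agree to roundoff
>> norm(Lp*A - D*Lp)       % the rows of Lp are still left eigenvectors of A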


What was lost in the re-scaling?

>> [diag(L*L'),diag(Lp*Lp')]

ans =

    1.0000    1.4310
    1.0000    2.9343
    1.0000    2.9846

The re-scaled left eigenvectors are no longer unit length.
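
Conversely, normalizing the rows of Lp back to unit length recovers unit norms but gives up the identity product, leaving only a diagonal one. A small sketch (Ln is just a new variable name for the re-normalized matrix):

>> Ln = bsxfun(@rdivide, Lp, sqrt(diag(Lp*Lp')));   % unit-length rows again
>> diag(Ln*Ln')                                     % back to all ones
>> norm(Ln*R - eye(3))                              % no longer the identity ...
>> norm(Ln*R - diag(diag(Ln*R)))                    % ... but still diagonal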

TroyHaskin
  • Thank you! You probably need to edit the first paragraph to read: The only time the left eigenvectors are scaled to unit length and the matrix of left eigenvectors (as rows) is guaranteed to be exactly the inverse of the matrix of right eigenvectors is for a Hermitian `A`. – Tarek Jun 07 '16 at 10:36
  • @Tarek That's one way of looking at it. But to me it is also a chicken-or-egg problem due to the inherent non-uniqueness of the eigenvectors. The fact that they're scaled to unity is typically either a by-product of the solution method or a standard choice of convenience for a solution to be returned by a solver. I chose the wording because only when `A` is Hermitian (or, more generally, normal) is the choice of scaling irrelevant, since the unitary nature of the left and right matrices gives them an inherent, orthogonal (orthonormal?) scaling against each other. – TroyHaskin Jun 07 '16 at 11:42