It's not a problem of precision but one of scaling and the fact that eigenvectors are not unique. The only time the matrix of left eigenvectors (as rows) is guaranteed to be exactly the inverse of the matrix of right eigenvectors is for a Hermitian A, although their product is always diagonal. The rows of the inverse of the matrix of right eigenvectors are always left eigenvectors of A, but they are not the only valid set of left eigenvectors; the difference is scaling.
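To see why, here is a quick sketch in the same notation used below (R holds the right eigenvectors as columns, D is the diagonal matrix of eigenvalues):
A*R = R*D
inv(R)*A = D*inv(R)
so row i of inv(R), call it l, satisfies l*A = lambda_i*l, which is exactly the left-eigenvector relation. Any nonzero rescaling of l satisfies it just as well, which is where the ambiguity comes from.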
For your example (I'm going to use R and L = W' since I find it more natural):
>> A=[1+i,2-i,3;1,i,0.5i;5i,7,-2];
>> [R,D,W]=eig(A);
>> L = W';
>> Rinv = D/(A*R);
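(As an aside on the Rinv line: since A*R = R*D, we have D/(A*R) = D*inv(R*D) = D*inv(D)*inv(R) = inv(R), so Rinv is just inv(R) computed without forming an explicit inverse of R. If you want to double-check that numerically, something like
>> norm(Rinv*R - eye(3))
should come out at the level of machine precision.)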
Are L and Rinv matrices of left eigenvectors?
>> [norm(L*A - D*L) , norm(Rinv*A - D*Rinv)]
ans =
1.0e-14 *
0.4254 0.9041
Yes, both residuals are at the level of machine precision.
Is the product of L and R diagonal?
>> LR = L*R
LR =
0.8319 + 0.0826i 0.0000 + 0.0000i 0.0000 - 0.0000i
0.0000 - 0.0000i 0.3976 + 0.4274i -0.0000 - 0.0000i
-0.0000 - 0.0000i 0.0000 + 0.0000i -0.3079 - 0.4901i
Yep.
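(Why is it diagonal? For distinct eigenvalues, take a left eigenvector l_i with eigenvalue lambda_i and a right eigenvector r_j with eigenvalue lambda_j. Then l_i*A*r_j equals both lambda_i*(l_i*r_j) and lambda_j*(l_i*r_j), so (lambda_i - lambda_j)*(l_i*r_j) = 0, forcing l_i*r_j = 0 whenever i ~= j. The diagonal entries, on the other hand, depend entirely on how the two sets happen to be scaled.)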
Now what happens if we scale each left eigenvector (row) of L such that the above product is the identity?
>> Lp = bsxfun(@rdivide,L,diag(LR))
Lp =
-0.4061 - 0.5332i -0.3336 + 0.6109i 0.7017 - 0.0696i
0.7784 + 0.0140i 0.9824 - 1.0560i 0.4772 - 0.1422i
0.2099 - 0.0812i -0.9004 + 1.4331i -0.2219 - 0.1422i
>> Rinv
Rinv =
-0.4061 - 0.5332i -0.3336 + 0.6109i 0.7017 - 0.0696i
0.7784 + 0.0140i 0.9824 - 1.0560i 0.4772 - 0.1422i
0.2099 - 0.0812i -0.9004 + 1.4331i -0.2219 - 0.1422i
We recover Rinv with the re-scaling. And since Rinv is a set of left eigenvectors, so is Lp.
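(If you want to confirm that numerically, something along the lines of
>> [norm(Lp - Rinv) , norm(Lp*R - eye(3))]
should give values at the level of machine precision.)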
What was lost in the re-scaling?
>> [diag(L*L'),diag(Lp*Lp')]
ans =
1.0000 1.4310
1.0000 2.9343
1.0000 2.9846
The re-scaled left eigenvectors are no longer unit length (diag(Lp*Lp') gives the squared 2-norms of the rows, compared with 1 for the rows of L).
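(And it goes the other way too: normalizing the rows of Rinv, e.g. with bsxfun(@rdivide,Rinv,sqrt(diag(Rinv*Rinv'))), gives back unit-length left eigenvectors, at the cost of the product with R being merely diagonal rather than the identity. Either convention is a perfectly good set of left eigenvectors; the only difference is the scaling.)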