I have a matrix in the form of a DataFrame:
df =
            6M         1Y         2Y         4Y         5Y        10Y        30Y
6M         n/a        n/a        n/a        n/a        n/a        n/a        n/a
1Y         n/a          1  0.9465095   0.869504  0.8124711    0.64687  0.5089244
2Y         n/a  0.9465095          1  0.9343177  0.8880676  0.7423546  0.6048189
4Y         n/a   0.869504  0.9343177          1  0.9762842  0.8803984  0.7760753
5Y         n/a  0.8124711  0.8880676  0.9762842          1  0.9117788  0.8404656
10Y        n/a    0.64687  0.7423546  0.8803984  0.9117788          1  0.9514033
30Y        n/a  0.5089244  0.6048189  0.7760753  0.8404656  0.9514033          1
I read the values from a matrix (real numbers), and whenever there is no data I insert the string 'n/a'
(I need to maintain this format for other reasons).
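For reference, a minimal sketch that rebuilds this DataFrame; because the 'n/a' entries are strings, every column ends up with dtype object:

import numpy as np
import pandas as pd

labels = ['6M', '1Y', '2Y', '4Y', '5Y', '10Y', '30Y']
rows = [
    ['n/a'] * 7,
    ['n/a', 1.0, 0.9465095, 0.869504, 0.8124711, 0.64687, 0.5089244],
    ['n/a', 0.9465095, 1.0, 0.9343177, 0.8880676, 0.7423546, 0.6048189],
    ['n/a', 0.869504, 0.9343177, 1.0, 0.9762842, 0.8803984, 0.7760753],
    ['n/a', 0.8124711, 0.8880676, 0.9762842, 1.0, 0.9117788, 0.8404656],
    ['n/a', 0.64687, 0.7423546, 0.8803984, 0.9117788, 1.0, 0.9514033],
    ['n/a', 0.5089244, 0.6048189, 0.7760753, 0.8404656, 0.9514033, 1.0],
]
# mixed strings and floats in each column -> pandas stores them as object dtype
df = pd.DataFrame(rows, index=labels, columns=labels)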
I would like to compute the eigenvalues of the subset of the DataFrame that contains float values (essentially the subset from '1Y' to '30Y').
I can extract the subset using iloc:
tmp = df.iloc[1:df.shape[0], 1:df.shape[1]]
and this extracts the correct values (I checked the types and they are floats). But when I try to compute the eigenvalues of tmp
using np.linalg.eigvalsh
I get an error:
TypeError: No loop matching the specified signature and casting was found for ufunc eigvalsh_lo
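Inspecting tmp more closely suggests where the mismatch sits (a quick check; the printed dtype names are what I see on my pandas version):

tmp = df.iloc[1:df.shape[0], 1:df.shape[1]]
print(type(tmp.iloc[0, 0]))   # <class 'float'> -- each individual value is a float
print(tmp.dtypes.unique())    # [dtype('O')]    -- but every column is still object
print(tmp.values.dtype)       # object          -- this is what eigvalsh cannot cast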
The strange thing is that when I start from a DataFrame where the 'n/a' entries are replaced by 0.0,
the whole process works with no problem (it needs to be initialized with the float 0.0
and not, for instance, the integer 0).
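A minimal sketch of that working variant, building the frame with 0.0 from the start (reusing labels and rows from the sketch above):

rows0 = [[0.0 if v == 'n/a' else v for v in row] for row in rows]
df0 = pd.DataFrame(rows0, index=labels, columns=labels)  # all floats -> float64 columns
tmp0 = df0.iloc[1:, 1:]
print(np.linalg.eigvalsh(tmp0))  # runs with no error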
It seems that if some part of the DataFrame is not numeric, the subset extraction does not convert the values to real numbers.
Is there a way to overcome this problem?