
I'm trying to estimate an EGARCH model in Python. Everything runs, but I get different values for alpha, beta, and gamma than Gretl and MATLAB report. I can't figure out what I'm doing wrong. Any clue?

```python
import pandas as pd
import numpy as np
import statsmodels.api as sm
from arch import arch_model
from arch.__future__ import reindexing

data = pd.read_excel('dati_mercati.xlsx')
price_eur = data["EUR"]
price_usa = data["USA"]

# Log returns
rend_eur = np.log(price_eur) - np.log(price_eur.shift(1))
rend_usa = np.log(price_usa) - np.log(price_usa.shift(1))

# AR(1) mean equation: regress returns on a constant and their first lag
X = sm.add_constant(rend_usa.shift(1))
y = rend_usa
X = X.dropna()
y = y[X.index]

results = sm.OLS(y, X).fit()
residui_usa = results.resid

# EGARCH(1,1) with a leverage term, fitted to the OLS residuals
model_usa = arch_model(residui_usa, vol='EGARCH', p=1, q=1, o=1, rescale=False)
model_usa_fit = model_usa.fit()
print(model_usa_fit.summary())
```

For example, in MATLAB and Gretl the leverage term is -0.1640, while Python gives me -0.1652. The constant is -0.5080 in MATLAB and Gretl, but -0.501 in Python.

The differences seem too large to be explained by the usual numerical tolerances of maximum likelihood estimation.

I expected the same results.
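One check I could run is to evaluate the EGARCH log-likelihood at both parameter vectors on the same residual series; whichever fit has the higher log-likelihood is the better optimum, which would tell me whether the gap is just the optimizers stopping at different points. Below is a minimal sketch of the Gaussian EGARCH(1,1,1) log-likelihood in the same parameterization arch documents (the function name and the sample-variance backcast are my own assumptions, not anything from a package):

```python
import numpy as np

def egarch_loglik(resid, params):
    """Gaussian log-likelihood of an EGARCH(1,1,1):
    ln s2_t = omega + alpha*(|z_{t-1}| - E|z|) + gamma*z_{t-1} + beta*ln s2_{t-1},
    where z_{t-1} = e_{t-1}/s_{t-1} and E|z| = sqrt(2/pi) for a standard normal.
    `resid` is a 1-D numpy array of mean-equation residuals."""
    omega, alpha, gamma, beta = params
    T = resid.shape[0]
    log_s2 = np.empty(T)
    log_s2[0] = np.log(resid.var())   # backcast: sample variance (an assumption)
    c = np.sqrt(2.0 / np.pi)          # E|z| under normality
    for t in range(1, T):
        z = resid[t - 1] / np.exp(0.5 * log_s2[t - 1])
        log_s2[t] = omega + alpha * (abs(z) - c) + gamma * z + beta * log_s2[t - 1]
    s2 = np.exp(log_s2)
    return -0.5 * np.sum(np.log(2.0 * np.pi) + log_s2 + resid**2 / s2)
```

With this I could call `egarch_loglik(residui_usa.to_numpy(), theta)` once with the Python estimates and once with the MATLAB/Gretl ones. If the other packages' estimates score higher, the gap presumably comes from starting values or convergence tolerance rather than a bug (arch's `fit()` accepts `tol` and `options` to tighten the optimizer, if I understand the docs correctly); exact agreement also depends on the two packages using the same backcast for the first variance.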
