
Is there a way to measure the accuracy of an ARMA-GARCH model in Python using a prediction interval (alpha = 0.05)? I fitted an ARMA-GARCH model on log returns and already used some classical metrics such as RMSE and MSE (out-of-sample), AIC (in-sample), checks on the residuals and so on. I would now like to add a prediction interval as another accuracy measure based on my ARMA-GARCH model predictions. I used the armagarch library (https://github.com/iankhr/armagarch). I already read up on prediction intervals in general, but I'm not sure how to use them with ARMA-GARCH. The formula I found online is: Estimator ± 1.96 (for 95%) * Standard Error. So far so good, but my model output contains a separate standard error for each parameter of the ARMA and GARCH parts. Which one do I have to use? Is there one standard error for the whole model itself?
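
To make the formula concrete, this is how I read it, as a toy sketch with made-up numbers (not output from my model):

# toy example of the textbook formula: estimate +/- 1.96 * standard error
point_forecast = 0.001     # e.g. a one-step-ahead mean forecast of the log return (made up)
forecast_std_error = 0.02  # the standard error I'm unsure about (made up)

lower = point_forecast - 1.96 * forecast_std_error
upper = point_forecast + 1.96 * forecast_std_error
print(lower, upper)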

I would be really happy if anyone could help.

[image: ARMA-GARCH model output]

So far I created an ARMA(2,2)-GARCH(1,1) model:

#final test of function

import pandas as pd
import numpy as np
import armagarch as ag

#definitions framework
#data holds the training log returns as a single-column DataFrame
data = pd.DataFrame(data)

#mean model ARMA(2,2), volatility model GARCH(1,1), normal innovations
meanMdl = ag.ARMA(order = {'AR':2, 'MA':2})
volMdl = ag.garch(order = {'p':1, 'q':1})
distMdl = ag.normalDist()
model = ag.empModel(data, meanMdl, volMdl, distMdl)
model_fit = model.fit()

After the fit I define the prediction length, receive two arrays as output (mean and variance) and put them into the correct shape:

#first array is the mean prediction, second is the variance prediction
pred = model.predict(nsteps=len(df_test))

#correct the shapes!
df_pred_mean = pd.DataFrame(np.reshape(pred[0], (len(df_test), 1)))
df_pred_variance = pd.DataFrame(np.reshape(pred[1], (len(df_test), 1)))
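
For the interval calculation I then put the actual test returns and the two forecast columns into one DataFrame. Roughly like this (the column names pred_Mean and pred_Variance are just my own labels, and df_test is the hold-out set of log returns):

#combine actuals and forecasts into one frame for the interval calculation
df_all = pd.DataFrame({
    "actual": np.ravel(df_test.values),
    "pred_Mean": df_pred_mean[0].values,
    "pred_Variance": df_pred_variance[0].values,
})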

So far so good. Now I would like to implement the prediction interval. My understanding is that for each step one takes the ARMA (mean) prediction ± 1.96 (for 95%) times the GARCH prediction. I implemented this for the upper and lower bound; only the upper bound is shown below, the lower bound is the same formula with -1.96 at the end (shown right after it).

#upper bound
df_all["upper bound"] = df_all["pred_Mean"] + df_all["pred_Variance"] * 1.96

When I plot this interval against the actual log returns I trained the model on, it fails in the sense that it is completely wrong. Now I'm unsure whether my main approach is wrong or whether the problem is the model, i.e. the package I used.

[image: prediction interval vs. actual log return]

Lukas K
  • It would help if you included your actual code in the question, that way it's easier for others to reproduce and work with your problem. See also https://stackoverflow.com/help/minimal-reproducible-example and have a nice day – The Lemon Aug 18 '21 at 04:44
  • @The Lemon I changed the main issue and added some code. Is it fine that way? I'm using this forum for the very first time, so I'm trying my best. – Lukas K Aug 18 '21 at 08:25
  • looks great, unfortunately I know nothing about the topic haha, you'll have to wait and see if anyone vibes with the question – The Lemon Aug 18 '21 at 08:53

0 Answers