
I am building a VAR(X) model to find the effects between advertising expenditures in different channels and Google Trends Search Volume Index for a specific brand and its competitors using daily time-series data.

However, when checking for residual autocorrelation, the null hypothesis of no autocorrelation is rejected at a high number of lags. I have read contradictory information on whether autocorrelation is a big issue here. Could you please advise me on the best way to deal with this autocorrelation? I am working in EViews.

Another issue I encounter concerns heteroskedasticity: that assumption on the residuals is also violated. I cannot log-transform the data because I have a lot of zero values.

I hope somebody can help me with these modelling issues.

KR, Larissa Komen

Haroon Lone
  • I would suggest you ask this question on cross-validated.com. Stack Overflow is meant for programming questions. – Haroon Lone May 19 '16 at 04:48

1 Answer


Serial autocorrelation ("autocorrelation at a high number of lags") is usually a symptom of misspecification. Most likely you used non-stationary time series. If that is the case, you should not fit a VAR in levels but a vector error correction model (VECM) instead, or at least difference the data.
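To illustrate why differencing matters, here is a minimal numpy sketch (Python, not EViews; the simulated random walk stands in for a non-stationary series):

```python
import numpy as np

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a 1-D series."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

rng = np.random.default_rng(0)
levels = np.cumsum(rng.normal(size=500))  # random walk: non-stationary
diffs = np.diff(levels)                   # first difference: white noise

# Levels of a random walk are highly persistent (autocorrelation near 1);
# the differences show essentially no autocorrelation.
print(lag1_autocorr(levels))
print(lag1_autocorr(diffs))
```

A model fit to the persistent levels series will typically leave strongly autocorrelated residuals, whereas the differenced series does not have that problem to begin with.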

If your data are stationary, try varying the number of lags. That usually helps.
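Lag selection can be sketched with a small hand-rolled OLS/AIC example in Python (for illustration only; in EViews you would use the built-in lag length criteria, and the AR(2) coefficients below are made up):

```python
import numpy as np

def ar_aic(y, p):
    """OLS fit of an AR(p) with intercept; return AIC = n*log(RSS/n) + 2*k."""
    n = len(y)
    Y = y[p:]
    lags = np.column_stack([y[p - j : n - j] for j in range(1, p + 1)])
    X = np.column_stack([np.ones(len(Y)), lags])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    rss = np.sum((Y - X @ beta) ** 2)
    return len(Y) * np.log(rss / len(Y)) + 2 * (p + 1)

# Simulate an AR(2) process (hypothetical coefficients 0.5 and 0.3).
rng = np.random.default_rng(0)
e = rng.normal(size=600)
y = np.zeros(600)
for t in range(2, 600):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + e[t]
y = y[100:]  # drop burn-in

aics = {p: ar_aic(y, p) for p in range(1, 7)}
best_p = min(aics, key=aics.get)  # lag order minimizing AIC
```

Choosing too few lags leaves dynamics in the residuals, which shows up exactly as the residual autocorrelation you describe; an information criterion such as AIC trades fit against the number of parameters.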

And there is one more likely cause: your data may contain structural breaks or outliers. In that case, try adding dummy variables.
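A sketch of how such dummies might be constructed (Python/numpy; the outlier position, the z-score threshold of 4, and the break index 150 are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(size=200)
y[120] += 8.0  # inject a large outlier for the example

# Impulse dummy: 1 only at observations with an extreme z-score.
z = (y - y.mean()) / y.std()
impulse = (np.abs(z) > 4).astype(float)

# Step dummy for a known break date (hypothetical index 150): 0 before, 1 after.
step = (np.arange(len(y)) >= 150).astype(float)

# These columns would then enter the model as exogenous regressors.
exog = np.column_stack([impulse, step])
```

The impulse dummy absorbs a one-off outlier, while the step dummy absorbs a level shift; either left unmodelled can produce both residual autocorrelation and apparent heteroskedasticity.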

Hope this will help.

Katin