Your question is ill-posed as stated, as others have noted, so I would encourage you to do some reading and think about what you really want to do first. Nonetheless, I can provide some guidance, which has more to do with straightening out the concepts than with programming. Based on the question and the comments, your question should be rephrased as, "How does one estimate the covariance matrix for the parameters of an AR(n) model when the parameters are obtained via maximum likelihood estimation?" In that case we can answer as follows.
- For the sake of discussion let's first generate some test data.
>> rng(0);
>> n = 10000;
>> x = rand(n,2);
>> y = x*[5;10] + 20 + randn(n,1);   % assumed response (coefficients 5 and 10, intercept 20, unit-Normal noise), consistent with the estimates below
- First obtain sensible initial conditions by running OLS. This is how most people estimate autoregressions anyway (nearly the entire literature on vector autoregressions), and OLS and MLE are asymptotically equivalent under some conditions. Include a column of ones if you want to include a constant (which you want to do if you are working with non-demeaned data). In the output, b is your parameter estimate vector and C is the corresponding vector of standard errors. (A quick check against the textbook OLS formulas follows the output below.)
>> X = [x,ones(n,1)];
>> [b,C]=lscov(X,y)
b =
4.9825
9.9501
20.0227
C =
0.0347
0.0345
0.0266
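If you want to confirm what lscov is doing, the same numbers fall out of the textbook OLS formulas. A quick sketch (mse here is lscov's third output, its estimate of the error variance):
>> b_check = X\y;                          % same point estimates via backslash (QR-based least squares)
>> [~, ~, mse] = lscov(X,y);               % mse is lscov's estimate of the error variance
>> se_check = sqrt(diag(mse*inv(X'*X)));   % textbook standard errors: s^2 * inv(X'X)
Both b_check and se_check should match b and C above up to rounding.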
- Before you can do maximum likelihood estimation you need a likelihood. We can construct one as the composition of a couple of simple anonymous functions. We also need to construct an initial-conditions vector that includes the standard deviation of the prediction error, because this is one of the things we are going to estimate with MLE. I am going to assume that you would like Normally distributed errors and base my example on that, but you didn't say, and of course you could choose anything (a non-Normal error distribution is typically the motivation for not doing OLS). Also, I will build a negative log-likelihood, since we are using minimization routines. (A quick sanity check of this construction follows the code below.)
>> err = @(b) y - X*b
err =
@(b)y-X*b
>> std(err(b))
ans =
0.9998
>> b0 = [b; std(err(b))];
>> nll = @(b) (n/2)*log(2*pi*b(end)^2) + (1/(2*b(end)^2))*sum(err(b(1:end-1)).^2);
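As a sanity check on that construction, the same negative log-likelihood can be built by evaluating the Normal density directly (this assumes you have normpdf from the Statistics Toolbox; it is just a check, not required for anything below):
>> nll_check = @(b) -sum(log(normpdf(err(b(1:end-1)), 0, b(end))));
>> nll(b0) - nll_check(b0)   % should be zero up to floating-point error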
- Next, use some type of minimization routine (e.g., fminsearch, fminunc, etc.) to do the MLE. (If you use fminunc, you can also get a covariance estimate directly from the returned Hessian, as sketched after the output below.)
>> bmle = fminsearch(nll,b0)
bmle =
4.9825
9.9501
20.0227
0.9997
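As an aside, if you have the Optimization Toolbox, fminunc can hand you a numerical approximation to the Hessian of the negative log-likelihood at the optimum, and inverting it gives another estimate of the covariance matrix, now including the error standard deviation as a parameter. A minimal sketch (the quasi-Newton Hessian is only an approximation, so expect small differences from the analytic expression below):
>> opts = optimoptions('fminunc','Display','off');
>> [bmle2,~,~,~,~,H] = fminunc(nll, b0, opts);
>> sqrt(diag(inv(H)))   % approximate standard errors of [b; sigma]
The first three entries should be close to the analytic standard errors computed next.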
Unsurprisingly, the estimates are almost identical to what we obtained under OLS, which is exactly what we expect when the errors are Normally distributed, and which is why most people opt for OLS unless there is some compelling reason to think the errors are non-Normal. The covariance matrix can be estimated by inverting the information matrix (equivalently, the outer product of the scores), which takes a particularly simple form in the Normal case: X'*X divided by the error variance. Note that bmle(end) is the error standard deviation, so it gets squared.
>> inv(X'*X/bmle(end)^2)
ans =
0.0012 0.0000 -0.0006
0.0000 0.0012 -0.0006
-0.0006 -0.0006 0.0007
Finally, the standard errors match what we obtained in the least squares case.
>> sqrt(diag(inv(X'*X/bmle(end)^2)))
ans =
0.0347
0.0345
0.0266
EDIT: Sorry, I just realized my test data are cross-sectional rather than time-series data. I will fix this as soon as I get time, but the methodology for estimating the models stays the same.
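In the meantime, here is a minimal sketch of what the test data would look like in the time-series case, assuming (purely for illustration) an AR(2) with coefficients 0.5 and 0.3, a constant of 1, and unit-Normal errors. Once Y and X are built this way, the lscov / nll / fminsearch steps above apply unchanged:
>> rng(0);
>> T = 10000;
>> phi = [0.5; 0.3]; c = 1;    % assumed AR(2) coefficients and constant
>> yts = zeros(T,1);
>> for t = 3:T
       yts(t) = c + phi(1)*yts(t-1) + phi(2)*yts(t-2) + randn;
   end
>> Y = yts(3:end);                                 % left-hand side
>> X = [yts(2:end-1), yts(1:end-2), ones(T-2,1)];  % first lag, second lag, constant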