You might be confusing the correlation matrix of a random vector (multivariate random variable) with the autocorrelation matrix of a random process (stochastic process)...
So if your series is a vector autoregressive model of order 1 (which it seems to be, so h' is your coefficient matrix), then E[y(t-1)*y(t-1)'] indeed makes sense, and is the correlation matrix of the random vector itself.
Now, under the assumption of stationarity, which you can verify by checking that the roots x_i of det(I - h'*x) = 0 lie outside the unit circle (have modulus greater than 1), the statistical properties of y[t_1] are equivalent to those of y[t_2] for all t_1, t_2 that are large enough. So in effect:
E[y(t-1)*y(t-1)'] = E[y(t)*y(t)']
If your process is NOT stationary, you're in trouble, since your correlation matrix then depends on the boundary conditions at t_0...
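As a quick numerical sanity check of that stationarity condition, here is a minimal sketch (assuming Python with numpy, and an arbitrary 2x2 coefficient matrix A standing in for h'; none of these values come from your model). It uses the fact that the roots of det(I - h'*x) = 0 lie outside the unit circle exactly when the eigenvalues of h' lie inside it:

```python
import numpy as np

# Hypothetical 2x2 coefficient matrix, standing in for h' above
A = np.array([[0.5, 0.1],
              [0.2, 0.3]])

# The roots x_i of det(I - A*x) = 0 are the reciprocals of the nonzero
# eigenvalues of A, so "all roots outside the unit circle" is equivalent
# to "all eigenvalues of A strictly inside the unit circle".
eigvals = np.linalg.eigvals(A)
print("eigenvalues of A:", eigvals)
print("stationary VAR(1)?", bool(np.all(np.abs(eigvals) < 1)))
```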
What you might be looking for, however, are expressions like:
E[y(t)*y(t-1)'] = E[(h'*y(t-1) + n(t))*y(t-1)']
But I don't know whether there are analytical representations of these as a function of E[y(t)*y(t)']... You can research that online, or in the references that your slides provide...
EDIT:
Since the OP has mentioned that this is a simple autoregressive model and not a vector autoregressive model, things are greatly simplified.
For stationary AR(1) models, there are nice analytical expressions for the mean, variance and autocovariance (and thus the autocorrelation). I'll give them here for the more general model: y(t) = c + h*y(t-1) + n(t)
E[y(t)] = c/(1-h) --> so in your case: 0
Var[y(t)] = Var[n(t)]/(1-h^2) --> since your mean is 0, this is equal to the E[y(t)y(t)] or E[y(t-1)y(t-1)] that you are looking for
Cov[y(t)y(t-j)] = Var[n(t)]*h^j/(1-h^2)
Corr[y(t)y(t-j)] = h^j --> this is the autocorrelation as a function of the time difference j
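If you want to see these expressions confirmed numerically, here is a small simulation sketch (assuming Python with numpy; the values c = 0.5, h = 0.8 and unit noise variance are purely illustrative, not taken from your model):

```python
import numpy as np

rng = np.random.default_rng(0)
c, h, sigma2 = 0.5, 0.8, 1.0          # illustrative values only
T = 200_000

# Simulate y(t) = c + h*y(t-1) + n(t), starting at the stationary mean
y = np.empty(T)
y[0] = c / (1 - h)
n = rng.normal(0.0, np.sqrt(sigma2), size=T)
for t in range(1, T):
    y[t] = c + h * y[t - 1] + n[t]

print("mean:     sample %.4f  theory %.4f" % (y.mean(), c / (1 - h)))
print("variance: sample %.4f  theory %.4f" % (y.var(), sigma2 / (1 - h**2)))
for j in (1, 2, 3):
    r = np.corrcoef(y[j:], y[:-j])[0, 1]   # sample autocorrelation at lag j
    print("corr lag %d: sample %.4f  theory %.4f" % (j, r, h**j))
```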
You can find all the mathematical derivations for these nicely explained in a reference book, or on the French Wikipedia page here, in the section "Moments d'un processus AR(1)".
It really boils down to what you are looking for... E[y(t-1)y(t-1)] is simply equal to E[y(t)y(t)] by definition of stationarity. Maybe you were really looking for the derivation of E[y(t)y(t-1)], which I will develop here:
E[y(t)y(t-1)] = E[(h*y(t-1) + n(t))*y(t-1)] = E[(h*y(t-1))*y(t-1)] + E[n(t)*y(t-1)]
Now, since n(t) is the white noise at time t, it is uncorrelated with y(t-1), so E[n(t)*y(t-1)] = 0, and we have:
E[y(t)y(t-1)] = E[(h*y(t-1))*y(t-1)] = h*E[y(t-1)*y(t-1)] = h*Var[y(t)] = h*Var[n(t)]/(1-h^2)
This matches exactly the expression for Cov[y(t)y(t-j)] given above, with j = 1...
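And a quick numerical check of that last equality, in the same spirit as the sketch above (again Python/numpy, with an illustrative h = 0.8, unit noise variance, and c = 0 as in your zero-mean case):

```python
import numpy as np

rng = np.random.default_rng(1)
h, sigma2 = 0.8, 1.0                  # illustrative values only
T = 200_000

y = np.zeros(T)
n = rng.normal(0.0, np.sqrt(sigma2), size=T)
for t in range(1, T):
    y[t] = h * y[t - 1] + n[t]        # zero-mean AR(1), c = 0

# E[y(t)*y(t-1)] should approach h*Var[n(t)]/(1 - h^2)
print("sample:", np.mean(y[1:] * y[:-1]))
print("theory:", h * sigma2 / (1 - h**2))
```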
Hope this helps.