I am trying to understand the relationship between the spectral density of a time series and its variance. From what I understand, the integral of the spectral density should equal the variance, at least according to most lecture notes, such as these.
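In symbols, the relationship I have in mind is
$$\operatorname{Var}(x_t) = \gamma(0) = \int_{-\pi}^{\pi} f(\omega)\, d\omega, \qquad \text{where } f(\omega) = \frac{1}{2\pi}\sum_{k=-\infty}^{\infty} \gamma(k)\, e^{-i\omega k}$$
and $\gamma(k)$ is the autocovariance at lag $k$.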
However, I am struggling to replicate this finding. Let's say I generate a simple AR(1) series with an autoregressive coefficient of 0.9:
T = 1000;
rho = 0.9;
dat = zeros(T,1);
for ii = 2:T
    dat(ii) = rho*dat(ii-1) + randn;   % AR(1) with standard normal innovations
end
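For reference, the theoretical variance of this process (with unit-variance innovations from randn) is $\sigma^2/(1-\rho^2) = 1/(1-0.9^2) \approx 5.26$, and the sample variance of the simulated series does come out around 5:

theo_var = 1/(1 - rho^2)   % theoretical AR(1) variance, about 5.26
samp_var = var(dat)        % around 5 for my simulated series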
I then proceed to calculate the spectral density. (autocov does the same thing as xcov in the Signal Processing Toolbox, which I don't have: it returns the autocovariances of the demeaned series, with the variance in the middle of the vector.)
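Roughly, autocov is just a small helper along these lines (a simplified sketch of what I am using, not the exact code):

function acov = autocov(x, maxLag)
% Autocovariances of the demeaned series at lags -maxLag:maxLag,
% with the lag-0 autocovariance (the variance) in the middle.
x = x(:) - mean(x);
T = numel(x);
acov = zeros(2*maxLag+1, 1);
for k = 0:maxLag
    c = sum(x(1+k:T).*x(1:T-k))/T;   % biased autocovariance at lag k
    acov(maxLag+1+k) = c;
    acov(maxLag+1-k) = c;            % gamma(-k) = gamma(k)
end
end

With that, the spectral density estimate is: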
lag = 20;
autocovs = autocov(dat,lag);         % autocovariances at lags -lag:lag
lags = -lag:1:lag;
wb = 0:pi/64:pi;                     % frequency grid on [0, pi]
rT = sqrt(length(dat));
weight = 1-abs(lags)/rT;
weight(abs(lags)>rT) = 0;            % Bartlett weight, truncated at sqrt(T)
sdb = zeros(size(wb));               % preallocate
for j = 1:length(wb)
    sdb(j) = real(sum(autocovs'.*weight.*exp(-1i*wb(j).*lags)))/(2*pi);
end
sdb is the power spectral density estimate, and it certainly has the correct shape for an AR(1), with most of the mass at low frequencies: [plot of the estimated spectral density omitted]. But the sum of the power spectrum is 54.5, while the variance of the simulated AR(1) series is around 5.
What am I missing? I understood the spectral density to describe how the variance of the series is distributed across frequencies. I'm not sure whether I have misunderstood the theory or made a coding error. Any good references would be much appreciated.
Edit: I realized that simply summing the "sdb" series is obviously not taking the integral. To integrate over [-pi, pi], I should be summing sdb*(2*pi/130), or equivalently sdb*(pi/65), since I am only looking at the [0, pi] segment and sdb is symmetric for negative frequencies. However, I still seem to get a number that is bigger than the variance (even after re-simulating multiple times)... am I still missing something? The sdb line above becomes:
sdb(j) = real(sum(autocovs'.*weight.*exp(-1i*wb(j).*lags)))/(65);
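For completeness, the comparison I am running after this change is essentially the following (the scaling of sdb is exactly the part I am unsure about):

spec_integral = sum(sdb)   % my attempt at the integral of the spectral density over [-pi, pi]
series_var = var(dat)      % sample variance of the simulated series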