
I want to jointly calibrate the drift mu and the volatility sigma of a geometric Brownian motion,

log(S_t) = log(S_{t-1}) + (mu - 0.5*sigma^2)*Deltat + sigma*sqrt(Deltat)*Z_t

where Z_t is a standard normal random variable. I am testing this by generating data x = log(S_t) via

mu = 0.1;  sigma = 0.2;        % true values
N = 1000;  Deltat = 1/N;       % N points over one year (or N = 100000)
x = zeros(1, N);  x(1) = 0;    % preallocate the log-price path
for i = 2:N
  x(i) = x(i-1) + (mu-0.5*sigma^2)*Deltat + sigma*sqrt(Deltat)*randn;
end

and my (log-)likelihood function

function LL = gbm_loglik(x, pars, Deltat)
% Log-likelihood of the log-price path x under the discretized GBM.
% pars = [mu, sigma]; Deltat must be passed in, since it is not in the function's scope otherwise.
mu    = pars(1);
sigma = pars(2);
Nt = size(x,2);
LL = 0;
for j = 2:Nt
  LH_j = normpdf(x(j), x(j-1)+(mu-0.5*sigma^2)*Deltat, sigma*sqrt(Deltat));
  LL = LL + log(LH_j);
end
end

which I maximize using fmincon (because sigma is constrained to be positive), with starting values 0.15 and 0.3, true values 0.1 and 0.2, and N = Nt = 1000 or 100000 points generated over one year (so Deltat = 0.001 or 0.00001).
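
For concreteness, a minimal sketch of the optimization call (my own variable names; it reuses x and Deltat from the generation loop above and assumes the likelihood function above is saved as gbm_loglik.m; fmincon minimizes, so I pass the negative log-likelihood and keep sigma positive via a small lower bound):

negLL = @(pars) -gbm_loglik(x, pars, Deltat);     % fmincon minimizes, so negate
pars0 = [0.15, 0.3];                              % starting values (mu, sigma)
lb = [-Inf, 1e-8];  ub = [Inf, Inf];              % keep sigma strictly positive
pars_hat = fmincon(negLL, pars0, [], [], [], [], lb, ub);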

Calibrating the volatility alone yields a nice likelihood function with a maximum around the true parameter. For small Deltat (less than, say, 0.1), however, calibrating mu and sigma jointly persistently gives a (log-)likelihood surface that is very flat in mu, at least around the true parameter, where I would expect a maximum as well. After all, I would think it should be possible to calibrate a GBM to a series of 100 stock prices observed over one year, i.e. an average Deltat of 0.01.
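
To make "flat in mu" concrete, this is the kind of profile I am looking at (a sketch only, reusing x, Deltat and gbm_loglik from above; the grid and fixing sigma at its true value 0.2 are arbitrary choices):

mu_grid = linspace(-0.5, 0.7, 61);          % grid around the true mu = 0.1
LL_mu = zeros(size(mu_grid));
for k = 1:numel(mu_grid)
  LL_mu(k) = gbm_loglik(x, [mu_grid(k), 0.2], Deltat);   % sigma held fixed
end
plot(mu_grid, LL_mu - max(LL_mu));
xlabel('mu');  ylabel('log-likelihood relative to maximum');
% Note: the curvature of LL in mu is roughly (N*Deltat)/sigma^2 = T/sigma^2,
% so it depends on the total time span T rather than on the number of points N.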

Any sharing of experience or help is greatly appreciated (thoughts passing through my mind: the likelihood function is not right / this is normal behaviour / too few data points / the data generation is not correct / ...?).
Thanks!

Futurist
  • Sounds like your question is more about statistics than programming; you are more likely to get an answer on [cross validated](http://stats.stackexchange.com/). If the data generation and the model are correct (this can be checked in textbooks), then you should probably accept that the likelihood function is flat and construct confidence intervals for the model parameters accordingly. For log-likelihood optimization I have had good experience with `fminunc` (quasi-Newton solver) and `fminsearch` (Nelder-Mead simplex solver), even with constraints on the parameters. A small reproducible example would be good. – rozsasarpi Feb 02 '15 at 11:50
  • Seconding Arpi: A flat likelihood function simply means that the data do not sufficiently constrain the model parameter. It is possible you don't have enough data, or that the model is overparametrized. – A. Donda Feb 02 '15 at 14:48
  • Thank you Arpi and A. Donda. I have also found fminsearch to give better results than fmincon when the constraint is not too close; a minimal sketch of that approach follows below. – Futurist Feb 06 '15 at 16:45
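
A minimal sketch of that fminsearch variant, handling the positivity of sigma by the reparameterization sigma = exp(theta) (my own choice of transform; it reuses x, Deltat and gbm_loglik from above):

negLL_u = @(p) -gbm_loglik(x, [p(1), exp(p(2))], Deltat);  % unconstrained in p
p0      = [0.15, log(0.3)];               % same starting values, transformed
p_hat   = fminsearch(negLL_u, p0);
mu_hat    = p_hat(1);
sigma_hat = exp(p_hat(2));                % map back to sigma > 0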

0 Answers