
I am trying to develop OxMetrics code for a multinomial (or conditional) logit model (as described in http://data.princeton.edu/wws509/notes/c6s2.html). I am new to OxMetrics and still struggle to understand how the optimisation routine (MaxBFGS) works.
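For reference, the model from the linked notes: the probability that individual $i$ chooses alternative $j$ among $J$ options, and the sample log-likelihood that the code below computes (the `num ./ den` and `sumc(log(prbi))` steps), are

```latex
P_{ij} = \frac{\exp(x_{ij}'\beta)}{\sum_{k=1}^{J} \exp(x_{ik}'\beta)},
\qquad
\ell(\beta) = \sum_{i} \sum_{j=1}^{J} y_{ij} \log P_{ij}
```

where $y_{ij}$ is 1 for the chosen option and 0 otherwise.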

Any help is more than welcome!

// FUNCTION FOR MULTINOMIAL LOGIT MODELLING
//X = Independent variables (e.g. product features)
//Y = Dependent variable (i.e. discrete choices (0/1))
//N = Total number of individuals (N=500)
//T = Number of observations (choice tasks) per respondent (T=20)
//J = Number of products per task (J=2 => choice between two options)
//sv => starting values
//llik => model log-likelihood

LOGIT(const sv, const llik, X, Y, N, T, J)
{
decl num, den, prbj, prbi, maty;
num = reshape(exp(X*sv[0:6]), N*T, J);
den = sumr(num);
prbj = num ./ den;
maty = reshape(Y .== 1, N*T, J);
prbi = sumr(prbj .* maty);
llik[0] = -sumc(log(prbi));
return llik[0];
}

main()
{
decl data, N, T, J, X, Y, sv, dFunc, fit;
data = loadmat("C:/Users/.../Data/data.csv");
X = data[][33] ~ data[][5:10];
Y = data[][12];
N = 500;
T = 20;
J = 2;
sv = <0;0;0;0;0;0;0>;
println ("\nEstimating using MaxBFGS");
fit = MaxBFGS(LOGIT, X=X, Y=Y, N=N, T=T, J=J, &sv, &dFunc, 0, TRUE);
println (MaxConvergenceMsg(fit), " at parameters ", sv', "with function value ", double(dFunc));
}
Umka

2 Answers


Thanks a lot, your answer was very helpful (somehow my brain struggles to switch from R to Ox). Not sure whether this is the right place to post Ox code for a mixed logit (or random parameters logit) model, but it might be useful to others. Of course, ideas/thoughts/tricks to improve the following code are very welcome!

DECLARE FORMAT OF THE DATASET

decl N = 433;     // Number of respondents
decl R = 100;     // Number of draws
decl T = 8;  // Number of tasks per respondent (!!! same for all respondents)
decl J = 2;   // Number of options per task (!!! same for all tasks)
decl Y = <0>;      // choice variable
decl X = <1;2;3;4;5;6>; // 6 param, including both fixed effects + mean of random effects
decl RC = <2;3;4;5;6>; // 5 param corresponding to SD of random effects
decl H, mY, mX, mRC;

FUNCTION TO COMPUTE MIXED LOGIT PROBABILITIES

fn_MXL(const beta, const obj, const grad, const hess) 
{
decl i, seqprb, num, den, sllik;
seqprb = zeros(N,R);
for(i = 0; i < R; ++i)
{
num = reshape(exp(mX * beta[0:5][] + mRC .* (H[N*T*J*i:(N*T*J*(i+1))-1][]) * beta[6:][]), N*T, J); // !!! Edit "beta" accordingly
den = sumr(num);
seqprb[][i] = prodr(reshape(sumr(num ./ den .* mY), N, T));
}
sllik = sumc(log(meanr(seqprb)));
obj[0] = double(sllik);
return 1;
}
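The simulated log-likelihood that fn_MXL evaluates is, writing $b$ for the fixed/mean coefficients (`beta[0:5]`), $\sigma$ for the standard deviations (`beta[6:]`), $\eta_{nr}$ for the normal draws in `H`, and $P_{nt}(\cdot)$ for the logit probability of the option chosen by respondent $n$ in task $t$:

```latex
\beta_{nr} = b + \sigma \odot \eta_{nr},
\qquad
\mathrm{SLL} = \sum_{n=1}^{N} \log\!\Big( \frac{1}{R} \sum_{r=1}^{R} \prod_{t=1}^{T} P_{nt}(\beta_{nr}) \Big)
```

The `prodr(reshape(...))` step forms the product over tasks, `meanr(seqprb)` averages over the $R$ draws, and `sumc(log(...))` sums over respondents.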

MODEL CODING/ESTIMATION

main()
{
decl data, i, id1, id2, Indiv, prime, Halton, sv, ir, obj;
// 1. Select variables
data = loadmat("C:/Users/.../choice_data.xls");
mY = reshape(data[][Y], N*T, J);
mX = data[][X];
mRC = data[][RC];
delete data;
// 2. Generate halton draws
id1 = id2 = zeros(N,R); // Create ID variables for both respondents and draws to re-order the draws
for(i = 0; i < N; ++i)
{
id1[i][] = range(1,R); // ID for draws
id2[i][] = i + 1; // ID for respondents
}
Indiv = reshape(id1, N*R, 1) ~ reshape(id2, N*R, 1);
Halton = zeros(N*R, sizeof(RC));
prime = <2;3;5;7;11;13;17;19;23;29;31;37;41;43;47;53;59;61;67;71;73;79;83;89;97;101;103;107;109;113>; // List of prime numbers
for(i = 0; i < sizeof(RC); ++i)
{
Halton[][i] = haltonsequence(prime[i], N*R);    // prime number ; number of draws
}
Halton = Indiv ~ Halton;
H = zeros(rows(Halton)*T*J,columns(Halton));     // Duplicate draws (T*J) times to match structure of "mX" (!!! possible to run out of memory)
for(i = 0; i < (T*J); ++i)
{
H[rows(Halton)*i:(rows(Halton)*(i+1)-1)][] = Halton;
}
H = sortbyc(H, <0;1>)[][2:]; // Draws are organised as follows: draw > individual (e.g. the first 5 rows correspond to the 1st draw for the first 5 individuals)
H = quann(H); // Transform halton draws into normal draws
// 3. Estimation
sv = <0;0;0;0;0;0;0;0;0;0;0>;      // Starting values for X and RC
MaxControl(-1, 1, 1);
ir = MaxBFGS(fn_MXL, &sv, &obj, 0, TRUE);
print("\n", MaxConvergenceMsg(ir), " using numerical derivatives", "\nFunction value = ", obj, "; parameters:", sv);
}
Umka
  • Nice implementation, and thanks for sharing it. A small idea to improve it: you can encapsulate your code in a class to avoid global variables, as shown in `probit3.ox`. Also, you should adopt a specific naming convention (https://en.wikipedia.org/wiki/Naming_convention_%28programming%29), for example: since the variable `Halton` is a matrix, you can use `mHalton`, where `m` stands for matrix. Likewise, use `iN` instead of `N`, where `i` stands for integer, `v` for vector, `d` for double, and so on. It makes the code much more readable. – Malick May 19 '16 at 03:03

You should read the official MaxBFGS help (press "F1" on the function).

This function is declared as:

MaxBFGS(const func, const avP, const adFunc, const amInvHess, const fNumDer);
  • argument func: the function to maximize.
  • argument avP: on input, the matrix of starting values; on output, the final coefficients. So you should first store all the parameters to be optimized in a single matrix (your sv variable).
  • argument adFunc: on output, receives the function value at the optimum.
  • argument amInvHess: set to 0 for a simple estimation.
  • argument fNumDer: set to 0 to use analytical first derivatives, or 1 to use numerical first derivatives.

The function to be maximized must have the following prototype:

LOGIT(const vP, const adFunc, const avScore, const amHessian)

As a simple introduction you can have a look at the following example (OxMetrics7\ox\samples\maximize\probit1.ox), which estimates a probit model via MaxBFGS; it is documented in the official book "Introduction to Ox" (Jurgen A. Doornik and Marius Ooms, 2006).

// file : ...\OxMetrics7\ox\samples\maximize\probit1.ox
#include <oxstd.oxh>
#import <maximize>

decl g_mY;                                  // global data
decl g_mX;                                  // global data


///////////////////////////////////////////////////////////////////
// Possible improvements:
// 1) Use log(tailn(g_mX * vP)) instead of log(1-prob)
//    in fProbit. This is slower, but a bit more accurate.
// 2) Use analytical first derivatives. This is a bit more robust.
//    Add numerical computation of standard errors.
// 3) Use analytical second derivatives and Newton instead
//    of Quasi-Newton (BFGS). Because the log-likelihood is concave,
//    we don't really need the robustness of BFGS.
// 4) Encapsulate in a class to avoid global variables.
// 5) General starting value routine, etc. etc.
//
// probit2.ox implements (2)
// probit3.ox implements (4)
// probit4.ox implements (4) in a more general way
///////////////////////////////////////////////////////////////////

fProbit(const vP, const adFunc, const avScore,
    const amHessian)
{
    decl prob = probn(g_mX * vP);   // vP is column vector

    adFunc[0] = double(
        meanc(g_mY .* log(prob) + (1-g_mY) .* log(1-prob)));

return 1;                           // 1 indicates success
}

main()
{
    decl vp, dfunc, ir;

    print("Probit example 1, run on ", date(), ".\n\n");

    decl mx = loadmat("data/finney.in7");

    g_mY = mx[][0];       // dependent variable: 0,1 dummy
    g_mX = 1 ~ mx[][3:4]; // regressors: 1, Lrate, Lvolume
    delete mx;

    vp = <-0.465; 0.842; 1.439>;        // starting values

    MaxControl(-1, 1, 1);          // print each iteration
                                               // maximize
    ir = MaxBFGS(fProbit, &vp, &dfunc, 0, TRUE);

    print("\n", MaxConvergenceMsg(ir),
        " using numerical derivatives",
        "\nFunction value = ", dfunc * rows(g_mY),
        "; parameters:", vp);
}

PS: you can use global variables for your X, Y, N, T, J, ... variables, and then improve your code from there (see probit2.ox, probit3.ox, ...).
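Putting that together, a minimal sketch of the question's LOGIT adapted to the required prototype (a sketch, assuming the hypothetical globals g_mX, g_mY, g_N, g_T, g_J have been filled in main; note that MaxBFGS maximizes, so the log-likelihood goes into adFunc[0] without the minus sign used in the question):

```ox
#include <oxstd.oxh>
#import <maximize>

decl g_mX, g_mY, g_N, g_T, g_J;        // data shared via globals (set in main)

// prototype required by MaxBFGS; the data come from the globals above
fLogit(const vP, const adFunc, const avScore, const amHessian)
{
    decl num  = reshape(exp(g_mX * vP), g_N * g_T, g_J);  // exp(utilities), one row per task
    decl prbj = num ./ sumr(num);                         // choice probabilities per option
    decl maty = reshape(g_mY .== 1, g_N * g_T, g_J);      // chosen-option dummies
    adFunc[0] = double(sumc(log(sumr(prbj .* maty))));    // log-likelihood (to be maximized)
    return 1;                                             // 1 indicates success
}
```

It is then called as `ir = MaxBFGS(fLogit, &vp, &dfunc, 0, TRUE);` with `vp` a 7x1 column vector of starting values.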

Malick