I'm attempting to implement a model in Stan by directly incrementing the log probability using the 'target += bernoulli_logit_lpmf(y | alpha)' functionality. The full model is below:
data {
  int<lower=0> Ntotal;
  int y[Ntotal];
  int<lower=0> Nsubj;
  int<lower=0> s[Ntotal];
  int<lower=0> Nstages;
  int<lower=0> Nstates;
  vector[Nstages*Nstates] log_ts;
  vector[Nstages*Nstates] log_ds;
  int<lower=0> a_i[Ntotal];
  int<lower=0> b_i[Ntotal];
}
parameters {
  real log_w1;
  real log_w3;
  real delta;
  real delta_p;
  real tau_p;
}
transformed parameters {
  vector[Nstates*Nstages] gradient;
  gradient = exp(log_ds*delta_p + log_ts*tau_p + log_w3) + exp(log_ds*delta + log_w1);
}
model {
  log_w1 ~ normal(0, 5);
  log_w3 ~ normal(0, 5);
  delta ~ normal(0, 2);
  delta_p ~ normal(0, 2);
  tau_p ~ normal(0, 2);
  for (n in 1:Ntotal) {
    target += bernoulli_logit_lpmf(y[n] | gradient[a_i[n]] - gradient[b_i[n]]);
  }
}
My understanding is that this should produce the same results as the sampling statement below:
model {
  log_w1 ~ normal(0, 5);
  log_w3 ~ normal(0, 5);
  delta ~ normal(0, 2);
  delta_p ~ normal(0, 2);
  tau_p ~ normal(0, 2);
  y ~ bernoulli_logit(gradient[a_i] - gradient[b_i]);
}
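For reference, my understanding is that the per-observation loop in the first model could also be written as a single vectorized increment, which I would expect to behave the same way (this is how I read the manual's section on vectorized log probability functions; please correct me if I've misread it):

model {
  log_w1 ~ normal(0, 5);
  log_w3 ~ normal(0, 5);
  delta ~ normal(0, 2);
  delta_p ~ normal(0, 2);
  tau_p ~ normal(0, 2);
  // one increment over all observations instead of a loop
  target += bernoulli_logit_lpmf(y | gradient[a_i] - gradient[b_i]);
}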
However, the two models produce quite different results (even with the same seed set in R and Stan). I thought the problem might lie in how I implemented the direct increment, so I also tried the following model.
model {
  real lps[Ntotal];
  log_w1 ~ normal(0, 5);
  log_w3 ~ normal(0, 5);
  delta ~ normal(0, 2);
  delta_p ~ normal(0, 2);
  tau_p ~ normal(0, 2);
  for (n in 1:Ntotal) {
    lps[n] = bernoulli_logit_lpmf(y[n] | gradient[a_i[n]] - gradient[b_i[n]]);
  }
  target += log_sum_exp(lps);
}
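To spell out my understanding of that last line, log_sum_exp(lps) should evaluate

  log_sum_exp(lps) = log(exp(lps[1]) + exp(lps[2]) + ... + exp(lps[Ntotal]))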
However, all three models produce different results. Interestingly, the direct increment runs about twice as fast as the sampling statement, but it does not appear to give correct results. I need to use the direct increment method because I will eventually extend this model into a mixture model (see the sketch below), and as far as I know, mixture models require the log probability to be incremented directly. Any help with this would be much appreciated!
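For context, here is a rough sketch of the kind of two-component mixture I'm working towards (lambda and gradient2 are hypothetical placeholders for a mixing weight and a second gradient vector; neither appears in the models above):

model {
  // priors as above, plus a prior on the mixing weight lambda
  for (n in 1:Ntotal) {
    // marginalize over the two mixture components for each observation
    target += log_mix(lambda,
                      bernoulli_logit_lpmf(y[n] | gradient[a_i[n]] - gradient[b_i[n]]),
                      bernoulli_logit_lpmf(y[n] | gradient2[a_i[n]] - gradient2[b_i[n]]));
  }
}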
Regards, Tim