I am currently developing a hierarchical Bayesian model in OpenBUGS that involves many binomial processes (about 6,000 sites). It describes successive removal electric fishing events/passes, and the general structure is as follows:
for (i in 1:n_sites){
  d[i] ~ dgamma(0.01, 0.01)                    # prior on fish density at site i
  N_tot[i] <- d[i] * S[i]                      # abundance = density x surface
  N[i,1] <- N_tot[i]                           # no removals before the first pass
  for (j in 2:n_pass[i]){
    N[i,j] <- N_tot[i] - sum( C[i,1:(j-1)] )   # fish remaining before pass j
  }
  for (j in 1:n_pass[i]){
    logit(p[i,j]) ~ dnorm(0, 0.001)            # prior on capture probability
    C[i,j] ~ dbin( p[i,j], N[i,j] )            # catch at site i on pass j
  }
}
where n_sites is the total number of sites I'm looking at, n_pass[i] is the number of fishing passes carried out at site i, N[i,j] is the number of fish present in site i at the time of pass j, N_tot[i] is the total number of fish in site i before any pass (the product of the density d[i] at the site and its surface S[i], where the surface is known), C[i,j] is the number of fish caught in site i during pass j, and p[i,j] is the probability of capture in site i on pass j.
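One caveat I'm aware of: a binomial sample size should be an integer, while N_tot[i] <- d[i] * S[i] is continuous, so the dbin call may be fragile in OpenBUGS. A common reparameterisation in removal models, shown here only as a minimal sketch (the dpois step is an assumption, not part of my current code), is to treat d[i] * S[i] as a Poisson mean so that the latent abundance is integer-valued:

for (i in 1:n_sites){
  d[i] ~ dgamma(0.01, 0.01)     # same density prior as above
  lambda[i] <- d[i] * S[i]      # expected abundance = density x known surface
  N_tot[i] ~ dpois(lambda[i])   # integer-valued total abundance (assumed change)
}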
Each site has on average 3 fishing passes, so the model contains a very large number of successive binomial processes, which typically takes a long time to compute/converge. I can't replace the binomials with an approximation because the catches are typically small.
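For context on why the passes stack up: conditional on N_tot[i] and the capture probabilities, the chain of binomials above is algebraically equivalent to a single multinomial over "first caught on pass j" categories plus the never-caught remainder. This is the standard removal-model likelihood, not something specific to my code. Writing $\pi_{i,1} = p_{i,1}$, $\pi_{i,j} = p_{i,j}\prod_{k<j}(1-p_{i,k})$, and $T_i = \sum_{j} C_{i,j}$ for the total catch:

$$P(C_{i,1},\dots,C_{i,J_i} \mid N^{tot}_i, p_{i,\cdot}) = \frac{N^{tot}_i!}{\Big(\prod_{j=1}^{J_i} C_{i,j}!\Big)\,(N^{tot}_i - T_i)!}\;\prod_{j=1}^{J_i} \pi_{i,j}^{\,C_{i,j}}\;\Big(1-\sum_{j=1}^{J_i}\pi_{i,j}\Big)^{N^{tot}_i - T_i}$$

so in principle the $J_i$ binomial nodes at each site carry no more information than this single expression; an implementation that targets the collapsed form directly is one of the alternatives I'd welcome pointers on.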
So I'm a bit stuck, and I'm looking for suggestions/alternatives for dealing with this issue.
Thanks in advance
Edit history: 15-11-2016: added prior definitions for d and p following @M_Fidino's request for clarification.