I am currently developing a hierarchical Bayesian model in OpenBUGS that involves a lot of binomial processes (about 6,000 sites). It describes successive-removal electric fishing events/passes, and the general structure is as follows:

for (i in 1:n_sites){
    # density prior; total abundance = density times known surface area
    d[i] ~ dgamma(0.01, 0.01)
    N_tot[i] <- round(d[i] * S[i])   # dbin needs an integer-valued size; round() is one workaround

    # first pass: all N_tot fish are available
    logit(p[i,1]) ~ dnorm(0, 0.001)
    C[i,1] ~ dbin(p[i,1], N_tot[i])

    # later passes: fish remaining = total minus catches from earlier passes
    # (starting at j = 2 avoids the empty sum C[i,1:0] on the first pass)
    for (j in 2:n_pass[i]){
        logit(p[i,j]) ~ dnorm(0, 0.001)
        N[i,j] <- N_tot[i] - sum(C[i,1:(j-1)])
        C[i,j] ~ dbin(p[i,j], N[i,j])
    }
}

where:

  • `n_sites` is the total number of sites.
  • `n_pass[i]` is the number of fishing passes carried out at site `i`.
  • `N_tot[i]` is the total number of fish at site `i` before any pass; it is the product of the density `d[i]` and the surface area `S[i]` of the site (the surface is known).
  • `N[i,j]` is the number of fish left at site `i` at the start of pass `j`.
  • `C[i,j]` is the number of fish caught at site `i` during pass `j`.
  • `p[i,j]` is the probability of capture at site `i` on pass `j`.
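To make the setup concrete, the data can be supplied to OpenBUGS as a list, with `C` padded to a rectangle using NA wherever a site has fewer passes than the maximum (OpenBUGS fills `.Data` row by row, and the padded entries are never referenced because `j` only runs to `n_pass[i]`). A minimal two-site sketch, with every number invented for illustration:

list(n_sites = 2,
     n_pass  = c(3, 2),
     S       = c(150.0, 80.5),
     C       = structure(.Data = c(12, 5, 2,
                                    7, 3, NA),
                         .Dim  = c(2, 3)))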

Each site has on average 3 fishing passes, so the model contains a lot of successive binomial processes, which typically take a long time to compute and converge. I can't approximate the binomial process (e.g. with a normal approximation) because the catches are typically small.

So I'm a bit stuck, and I'm looking for suggestions/alternatives to deal with this issue.

Thanks in advance

edit history: 15-11-2016: added prior definitions for `d` and `p` following @M_Fidino's request for clarification.

  • Can you calculate `N` outside of the model instead of within? – mfidino Nov 08 '16 at 15:12
  • The whole point of the model is to estimate `N` and the factors affecting the probability of capture. – Guillaume Dauphin Nov 08 '16 at 20:00
  • Then which of these are data included within the model and which are parameters you wish to estimate? Without additional information it's really difficult to make any suggestions. – mfidino Nov 09 '16 at 15:00
  • @M_Fidino `C[i,j]` is the catch data. The unknowns to estimate are `N_tot[i]`, `p[i,j]` and `N[i,j]`, where `N_tot[i] = d[i] * S[i]` (the total abundance equals the density times the surface `S[i]`). Priors: `logit(p[i,j]) ~ dnorm(0,0.001)` and `d[i] ~ dgamma(0.001,0.001)`. The actual model is a bit more complex, with multiple hierarchical levels on the density and the probability of capture, but even without those it is still slow to converge. I'm interested in knowing whether other people using BUGS run into these issues and how they deal with them. – Guillaume Dauphin Nov 15 '16 at 12:57
  • This will likely only offer VERY small gains but you could remove `N` from the model `C[i,j] ~ dbin( p[i,j] , (N_tot[i] - sum( C[i,1:(j-1)] )))`. That should be a little more efficient. Furthermore, as you are supplying `C` as data you could do some of the summations outside of the model instead of within. That way you are not using `sum` a whole bunch of times. – mfidino Nov 15 '16 at 14:53
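For what it's worth, a minimal sketch of what that last suggestion could look like, combining both ideas: dropping `N` and precomputing the running catch totals outside the model. The array name `Ccum_prev` is hypothetical; it would be supplied as data with `Ccum_prev[i,j] = sum(C[i,1:(j-1)])` and `Ccum_prev[i,1] = 0`:

for (i in 1:n_sites){
    d[i] ~ dgamma(0.01, 0.01)
    N_tot[i] <- round(d[i] * S[i])
    for (j in 1:n_pass[i]){
        logit(p[i,j]) ~ dnorm(0, 0.001)
        # N and the repeated sum() calls are gone: the cumulative catch before
        # pass j (Ccum_prev, assumed precomputed) is constant data, not a node
        C[i,j] ~ dbin(p[i,j], N_tot[i] - Ccum_prev[i,j])
    }
}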
