
All, I'd like to estimate the parameters P and Q, whose prior distributions are

P ~ N (A,B)

Q ~ N (C,D)

Then I find that the full conditional distributions of P and Q are

P|Q ~ N (A*,B*)

Q|P ~ N (C*,D*)

where A* is a function of A, B, Q [A* = f(A, B, Q)]

and B* is a function of A, B, Q [B* = g(A, B, Q)]

Hence, in the Gibbs updating step,

  1. [First iteration]

    • update P_0 to P_1 (using the information A, B, Q_0 to get A*_1, B*_1)
    • update Q_0 to Q_1 (using the information C, D, P_0 to get C*_1, D*_1) (the subscript denotes the sample from the nth iteration; 0 is the initial value)
  2. [Second iteration] my question is: am I going to

    • update P_1 to P_2 (using the information A, B, Q_1 to get A*_2, B*_2), OR
    • update P_1 to P_2 (using the information A*_1, B*_1, Q_1 to get A*_2, B*_2)?

In other words, do we use the same prior in every Gibbs step, or do we use the previous step's estimates as our prior? I know that one idea behind Gibbs sampling is to update each parameter in isolation, so I am going to use the information of Q_1 to update P_2. What about the prior?

Ying

1 Answer


Use the same prior at every step. Do not alter the prior parameters after you have begun or you will not be performing Bayesian inference.
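To make this concrete, here is a minimal Gibbs sketch for a hypothetical model that is *not* from the question (simple regression y_i = P + Q·x_i + noise with known noise variance s2, so both full conditionals are normal). The key point is in the loop: A*, B*, C*, D* are recomputed each iteration from the ORIGINAL prior hyperparameters (A, B, C, D) and the current value of the other parameter, never from the previous iteration's A*, B*:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: y_i = P + Q*x_i + noise, noise ~ N(0, s2),
# with priors P ~ N(A, B) and Q ~ N(C, D) (B, D are variances).
A, B = 0.0, 10.0   # prior for P -- fixed for the whole run
C, D = 0.0, 10.0   # prior for Q -- fixed for the whole run
s2 = 1.0           # known noise variance

# Simulated data with true P = 2, Q = -1
n = 200
x = rng.normal(size=n)
y = 2.0 - 1.0 * x + rng.normal(scale=np.sqrt(s2), size=n)

P, Q = 0.0, 0.0    # initial values P_0, Q_0
draws = []
for _ in range(2000):
    # P | Q ~ N(A*, B*): A*, B* are built from the ORIGINAL prior (A, B)
    # and the CURRENT Q -- not from last iteration's A*, B*.
    prec = 1.0 / B + n / s2                       # 1 / B*
    A_star = (A / B + np.sum(y - Q * x) / s2) / prec
    P = rng.normal(A_star, np.sqrt(1.0 / prec))

    # Q | P ~ N(C*, D*): likewise built from the original (C, D)
    prec = 1.0 / D + np.sum(x ** 2) / s2          # 1 / D*
    C_star = (C / D + np.sum(x * (y - P)) / s2) / prec
    Q = rng.normal(C_star, np.sqrt(1.0 / prec))

    draws.append((P, Q))

post = np.array(draws[500:])     # drop burn-in
print(post.mean(axis=0))         # posterior means, close to (2, -1)
```

So the answer to the second-iteration question is the first option: update P_1 to P_2 using A, B, Q_1.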

Alex
    Thanks for answering! **Does that mean I have to really carefully assign the parameters of prior initially**? I face a situation in my simulation: If I have a bad guess for the parameters of prior such as A and B here, I cannot get a good simulation result for posterior mean A*. Although I will update the posterior mean A* as well as Q in each iteration ( A*=f(A,B,Q) ), it is related to the prior parameter A,B. Do you mean that: **I won't be able to get rid of my bad guess information of prior parameter A,B once my simulation begins?** – Ying Oct 06 '16 at 17:45
  • Yes that's correct, you should choose the prior parameters carefully. You won't be able to get rid of your bad guess once the simulation begins. – Alex Oct 09 '16 at 18:20
  • If you do not have a good idea of the parameters before you begin then you can choose uninformative priors (https://en.wikipedia.org/wiki/Prior_probability#Uninformative_priors) for A and B. To quote the article I linked to "The simplest and oldest rule for determining a non-informative prior is the principle of indifference, which assigns equal probabilities to all possibilities." so set A and B so that each value of A*, B* is equally likely, or at least as close to this as possible. – Alex Oct 09 '16 at 18:24
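A quick numerical check of the "uninformative prior" suggestion, for a simplified sub-model assumed here for illustration (y_i ~ N(P, s2), so the full-conditional mean is A* = (A/B + n·ȳ/s2) / (1/B + n/s2)): a small prior variance B pulls A* toward the prior mean A, while a very large B makes the prior nearly flat and lets the data mean dominate:

```python
# Hypothetical sub-model y_i ~ N(P, s2): effect of the prior variance B
# on the full-conditional mean A* of P.
A, s2, n, ybar = 0.0, 1.0, 50, 3.2   # prior mean, noise var, sample size, data mean

for B in (0.1, 1e6):
    prec = 1.0 / B + n / s2          # full-conditional precision 1/B*
    A_star = (A / B + n * ybar / s2) / prec
    print(f"B={B:g}  A*={A_star:.3f}")
# -> A* = 2.667 when B = 0.1 (pulled toward A = 0),
#    A* = 3.200 when B = 1e6 (essentially the data mean)
```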