
Below, I'm showing a simple piece of Bayesian R code in which I use d&lt;distribution.name&gt;() commands (e.g., dcauchy(), dt()) to estimate what I call "x" in my code. I then plot the answers.

Coding Question:

I was wondering if I could achieve an approximately similar answer using r&lt;distribution.name&gt;() commands (e.g., rcauchy(), rt()) to estimate what I call "x" in my code?

(P.S.: Using r&lt;distribution.name&gt;() commands, I should then be able to make a histogram of the answers.)

    prior      <- function(x) dcauchy(x)                # standard Cauchy prior
    likelihood <- function(x) dt(1.46, 19, x*sqrt(20))  # noncentral-t likelihood (t = 1.46, df = 19, ncp = x*sqrt(20))
    posterior  <- function(x) prior(x) * likelihood(x)  # unnormalized posterior

    x <- seq(-5, 5, length.out = 1e4)
    plot(x, posterior(x), col = 3)
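One way to get an answer from r_() commands with this exact model (a sketch of my own, not something from the post) is sampling-importance-resampling: draw candidate values of x from the prior with rcauchy(), weight each draw by its likelihood, and resample in proportion to those weights. The histogram of the resampled draws then approximates the posterior curve plotted above. The sample size m and the seed are arbitrary choices of mine.

```r
# Sampling-importance-resampling (SIR) sketch -- assumptions: m and the seed
# are illustrative tuning choices, not values from the original question.
set.seed(1)
m <- 1e5
x.prior <- rcauchy(m)                         # draws from the Cauchy prior
w       <- dt(1.46, 19, x.prior * sqrt(20))   # likelihood weight per draw
x.post  <- sample(x.prior, m, replace = TRUE, prob = w)  # resample ~ posterior
hist(x.post, breaks = 100, freq = FALSE)      # shape matches the plotted curve
```

With m this large the histogram should sit close to the (normalized) curve from posterior(x); the approximation sharpens as m grows.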
asked by rnorouzian
  • I'm not sure what you are trying to do, but simply replacing `d_()` by `r_()` doesn't seem like it would make much sense. You can of course create simulations which illustrate Bayes' theorem, but I don't see how such simulations would be obtained by simply replacing `d_()` by `r_()` in a formula for a posterior distribution. – John Coleman Jun 22 '17 at 16:15
  • @JohnColeman, right — that's what I'm trying to understand: how could the integral approach above be turned into a corresponding simulation approach? – rnorouzian Jun 22 '17 at 16:25
  • The `Metropolis algorithm` (https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm) is a good way to generate numbers which follow a posterior distribution. The book "Doing Bayesian Data Analysis" by John Kruschke has a nice introduction to it. The core algorithm is easy enough to code directly in R, though I am sure that various packages already implement it. This might be overkill for a 1-dimensional case, but is nevertheless a natural way to proceed. The nice thing about the algorithm is that you don't need the denominator in Bayes' theorem — `prior*likelihood` suffices. – John Coleman Jun 22 '17 at 16:54
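The random-walk Metropolis approach mentioned in the comment above can be sketched directly against the question's unnormalized posterior. This is a minimal illustration, not a tuned sampler: the proposal standard deviation, iteration count, starting point, and seed are all assumptions of mine.

```r
# Random-walk Metropolis sketch for the question's model.
# Tuning values (proposal sd = 1, n.iter, start at 0, seed) are illustrative.
set.seed(1)
prior      <- function(x) dcauchy(x)
likelihood <- function(x) dt(1.46, 19, x * sqrt(20))
posterior  <- function(x) prior(x) * likelihood(x)   # unnormalized is enough

n.iter <- 1e5
draws  <- numeric(n.iter)
x.cur  <- 0
for (i in seq_len(n.iter)) {
  x.prop <- rnorm(1, x.cur, 1)                 # symmetric random-walk proposal
  if (runif(1) < posterior(x.prop) / posterior(x.cur))
    x.cur <- x.prop                            # accept; otherwise keep x.cur
  draws[i] <- x.cur
}
hist(draws, breaks = 100, freq = FALSE)        # approximates the posterior
```

Because the acceptance ratio posterior(x.prop)/posterior(x.cur) cancels the normalizing constant, only `prior*likelihood` is needed, exactly as the comment notes; discarding an initial burn-in portion of `draws` before histogramming is common practice.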

0 Answers