I am new to R as well as to Bayesian statistics. I am working through the problem set in Chapter 12 of *A Student's Guide to Bayesian Statistics* (this link has the problem as well as the answer plot).
In Problem 12.4.3, the author provides a plot of the error versus the number of samples.
Consider a type of coin for which the result of the next throw (heads or tails) can depend on the result of the current throw. In particular, if a heads is thrown then the probability of obtaining a heads on the next throw is 1/2 + ε; if instead a tails is thrown then the probability of obtaining a tails on the next throw is 1/2 + ε. To start, we assume 0 ≤ ε ≤ 1/2. The random variable X takes the value 0 if the coin lands tails up or 1 if it lands heads up on a given throw.
Problem 12.4.3: As ε increases, how does the error in estimating the mean change, and why?
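As I understand the process described above, each throw depends only on the previous throw. Here is a small sketch of my own that simulates it directly (the function name `simulate_chain` is mine, not from the book):

```r
# Simulate a chain of dependent coin throws where the probability that the
# next throw matches the current one is 1/2 + ep (so heads follows heads,
# and tails follows tails, with probability 1/2 + ep).
simulate_chain <- function(n, ep) {
  x <- numeric(n)
  x[1] <- rbinom(1, 1, 0.5)            # first throw is a fair coin
  for (i in 2:n) {
    p_heads <- if (x[i - 1] == 1) 0.5 + ep else 0.5 - ep
    x[i] <- rbinom(1, 1, p_heads)      # next throw depends on the previous one
  }
  x
}

# Sample mean of one dependent chain of 1000 throws
mean(simulate_chain(1000, 0.4))
```

Note that at ε = 1/2 the chain never switches, so every throw repeats the first one.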
I am getting a flat line, with no change in the error as the sample size increases.
What am I missing? My R code:
epsilon <- seq(from = 0, to = 0.5, length.out = 10)
lerrors <- numeric(length(epsilon))   # must be initialised before indexing into it

first_throw <- rbinom(n = 1, size = 1, prob = 1/2)
cat("\nFirst Throw: ", first_throw)
last_throw <- first_throw

for (s in c(10, 20, 100)) {
  j <- 1                              # reset once per sample size, not once per epsilon
  for (ep in epsilon) {
    if (last_throw == 1) {
      last_throw <- rbinom(n = 1, size = 1, prob = 1/2 + ep)
      curr_err <- abs(mean(replicate(1000, mean(rbinom(n = s, size = 1, prob = 1/2 + ep)))) - 0.5)
    } else {
      last_throw <- rbinom(n = 1, size = 1, prob = 1/2 - ep)
      curr_err <- abs(mean(replicate(1000, mean(rbinom(n = s, size = 1, prob = 1/2 - ep)))) - 0.5)
    }
    lerrors[j] <- curr_err
    j <- j + 1
  }
  cat("\n epsilon: ", epsilon)
  cat("\n lerrors: ", lerrors)
  plot(epsilon, lerrors, col = "blue")
  lines(epsilon, lerrors, col = "blue")
}
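For comparison, here is a sketch of my own (not the book's answer code; the name `sample_mean_chain` is mine). It simulates the dependent throws directly rather than drawing independent binomials, and it measures the error as the mean absolute deviation of the sample mean from 0.5, i.e. it averages |error| across replications instead of taking |average − 0.5|, since the latter is near zero for any ε by symmetry:

```r
# Simulate one chain of n dependent throws and return its sample mean.
sample_mean_chain <- function(n, ep) {
  x <- numeric(n)
  x[1] <- rbinom(1, 1, 0.5)                      # first throw is fair
  for (i in 2:n) {
    p_heads <- if (x[i - 1] == 1) 0.5 + ep else 0.5 - ep
    x[i] <- rbinom(1, 1, p_heads)
  }
  mean(x)
}

epsilon <- seq(from = 0, to = 0.45, length.out = 10)
set.seed(1)
for (s in c(10, 100)) {
  # Mean absolute error of the sample mean, over 200 replications per epsilon
  lerrors <- sapply(epsilon, function(ep)
    mean(abs(replicate(200, sample_mean_chain(s, ep)) - 0.5)))
  plot(epsilon, lerrors, type = "b", col = "blue",
       main = paste("n =", s), xlab = "epsilon", ylab = "mean |error|")
}
```

With this setup the error curves rise with ε: the stronger the dependence between throws, the fewer effectively independent observations a chain of fixed length contains.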