
So I have created a function that generates a data frame of summary statistics for the gamma distribution.

Currently, I have:

sample_gamma <- function(alpha, beta, n, iter) {
  # One row per iteration: iteration number, sample mean, sample standard deviation
  gamma.df <- as.data.frame(matrix(nrow = iter, ncol = 3))
  colnames(gamma.df) <- c("iteration", "mean", "standard dev")
  gamma.df$iteration <- 1:iter
  for (i in 1:iter) {
    # rate = beta already implies scale = 1/beta, so only one of the two is needed
    gamma.dist <- rgamma(n, shape = alpha, rate = beta)
    gamma.df[i, 2] <- mean(gamma.dist)
    gamma.df[i, 3] <- sd(gamma.dist)
  }
  print(gamma.df)
}
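
For example, a call like the one below prints a 5-row data frame with columns iteration, mean, and standard dev (an illustrative call; the values themselves depend on the random draws):

# 5 iterations, each taking n = 4 draws from a Gamma(shape = 2, rate = 3) distribution
sample_gamma(2, 3, 4, 5)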

The function does everything I need it to do, but I was wondering if there are any alternative or cleaner ways to do it.

Gian Batayola
  • There must be a class that is studying this, as I found https://stackoverflow.com/q/58964331, https://stackoverflow.com/q/58955845, and https://stackoverflow.com/q/58955845 that are all generating a function named `sample_gamma` that includes the same formal arguments and close to the same output. – r2evans Nov 21 '19 at 05:53

2 Answers


I would create a function that returns the mean and sd for a single iteration:

sample_gamma <- function(alpha,beta,n) {
    dist <- rgamma(n,shape = alpha, rate = beta)
    c(mean = mean(dist), sd = sd(dist))
}

and then repeat it using `replicate`:

t(replicate(5, sample_gamma(2, 3, 4)))

#          mean        sd
#[1,] 0.5990206 0.2404226
#[2,] 0.6108976 0.3083426
#[3,] 1.0616542 0.4602403
#[4,] 0.3415355 0.1543885
#[5,] 1.0558066 0.9659599
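
If you still want the data-frame layout from your question (with an iteration column), the `replicate` result is easy to wrap; a small sketch:

# Sketch: put the replicate() output back into the data-frame shape used in the question
res <- t(replicate(5, sample_gamma(2, 3, 4)))
data.frame(iteration = seq_len(nrow(res)), res)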
Ronak Shah

While I think Ronak Shah's answer is simple and relatively idiomatic (R-wise), here's one that is a little more efficient at high `iter` counts (since it does only a single random pull):

sample_gamma <- function(alpha, beta, n, iter) {
  mtx <- matrix(rgamma(n*iter, shape=alpha, rate=beta), nrow=n, ncol=iter)
  t(apply(mtx, 2, function(a) c(mean=mean(a), sd=sd(a))))
}
sample_gamma(2, 3, 4, 5)
#           mean         sd
# [1,] 0.6486220 0.22900833
# [2,] 0.8551055 0.07874287
# [3,] 0.7854750 0.72694260
# [4,] 0.7045878 0.24834502
# [5,] 1.1783301 0.25210538

Benchmarking:
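
In the benchmarks, `sample_gamma_RS` and `sample_gamma_r2` are just the two functions above (Ronak Shah's and mine), renamed so they can be timed side by side; for completeness:

# The two implementations above, renamed for the benchmark
sample_gamma_RS <- function(alpha, beta, n) {
  dist <- rgamma(n, shape = alpha, rate = beta)
  c(mean = mean(dist), sd = sd(dist))
}
sample_gamma_r2 <- function(alpha, beta, n, iter) {
  mtx <- matrix(rgamma(n*iter, shape = alpha, rate = beta), nrow = n, ncol = iter)
  t(apply(mtx, 2, function(a) c(mean = mean(a), sd = sd(a))))
}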

microbenchmark::microbenchmark(
  RS=t(replicate(5, sample_gamma_RS(2,3,4))),
  r2=sample_gamma_r2(2,3,4,5)
)
# Unit: microseconds
#  expr   min     lq    mean median    uq    max neval
#    RS 413.7 493.70 757.884 743.80 946.1 1611.6   100
#    r2 405.2 461.15 681.630 706.35 898.6 1348.2   100

microbenchmark::microbenchmark(
  RS=t(replicate(500, sample_gamma_RS(2,3,4))),
  r2=sample_gamma_r2(2,3,4,500)
)
# Unit: milliseconds
#  expr    min       lq     mean   median       uq      max neval
#    RS 31.271 40.58735 56.44298 57.85735 65.08605  95.1866   100
#    r2 29.110 38.81230 53.99426 57.45820 61.35720 100.5820   100

microbenchmark::microbenchmark(
  RS=t(replicate(500, sample_gamma_RS(2,3,400))),
  r2=sample_gamma_r2(2,3,400,500)
)
# Unit: milliseconds
#  expr     min       lq     mean   median       uq      max neval
#    RS 60.6782 101.3112 121.3533 116.7464 140.8845 227.1904   100
#    r2 66.3892  81.0329 106.9920  98.7170 126.7742 198.3947   100

I confess I thought it would be a more dramatic difference in performance.

r2evans
  • Nice approach as well. If you want to calculate column-wise mean and sd you can use `base::colMeans` and `matrixStats::colSds`, which should improve the performance. – Ronak Shah Nov 21 '19 at 06:22
  • True, those functions might squeeze a little more speed out of it. Somehow I'm not certain the class studying this `sample_gamma` is worried about large-sample performance, but it was an entertaining exercise. :-) – r2evans Nov 21 '19 at 06:25
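
For completeness, a sketch of the column-wise variant suggested in the comment above (it assumes the matrixStats package is installed; `sample_gamma_cw` is just an illustrative name):

# Column-wise variant: colMeans() from base R plus matrixStats::colSds()
sample_gamma_cw <- function(alpha, beta, n, iter) {
  mtx <- matrix(rgamma(n*iter, shape = alpha, rate = beta), nrow = n, ncol = iter)
  cbind(mean = colMeans(mtx), sd = matrixStats::colSds(mtx))
}
sample_gamma_cw(2, 3, 4, 5)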