
I have 3 vectors called vp, va and vb:

vp contains 1.5 million random probabilities

va contains 1.5 million alpha values

vb contains 1.5 million beta values

I am trying to create an output that is a vector that looks like:

vanswer <- c(qbeta(vp,va,vb)) 

I know that this works, but it is tremendously slow with such a large amount of data, and I am trying to find a way to speed it up. I have also tried doing the computation in a matrix with cbind() as well as using sapply(), but I cannot find the right way.

Any help would be appreciated!
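As a point of comparison (with simulated stand-in data, since the real vectors aren't shown), the single vectorized qbeta() call is already the fastest pure-R form; wrapping it in sapply() only adds per-element call overhead on top of the same C routine:

```r
## Timing sketch with assumed data; actual values will differ.
set.seed(1)
n  <- 1e5                          # smaller than 1.5e6, for a quick check
vp <- runif(n)                     # probabilities
va <- runif(n, 0.5, 5)             # assumed alpha values
vb <- runif(n, 0.5, 5)             # assumed beta values

## One vectorized call: the whole loop runs in C.
t_vec <- system.time(v1 <- qbeta(vp, va, vb))["elapsed"]

## Element-by-element via sapply(): same math, plus n R-level calls.
t_sap <- system.time(
  v2 <- sapply(seq_len(n), function(i) qbeta(vp[i], va[i], vb[i]))
)["elapsed"]

## Both forms produce identical results.
stopifnot(isTRUE(all.equal(v1, v2)))
```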

  • are all the values unique? – Ben Bolker May 21 '14 at 20:57
  • ... otherwise I think you're going to have a very hard time getting this any faster, as vectorized `qbeta` is all executed in C code. – Ben Bolker May 21 '14 at 21:00
  • Not all the alpha and beta values are unique, but all the probabilities are unique. What is C code? And what else could I do to make this faster? – user3662523 May 21 '14 at 21:07
  • 2
    someone in the R chat room suggested that you could parallelize this (i.e. run it on multiple cores, or multiple workstations: see `?clusterMap` in the `parallel` package that comes with R. – Ben Bolker May 22 '14 at 01:58
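A minimal sketch of the parallel approach mentioned in the comments, using `clusterMap` and `splitIndices` from the base `parallel` package. The data here are simulated stand-ins for the real vectors; each worker runs the already-vectorized qbeta() on one contiguous chunk:

```r
## Hedged sketch, assuming simulated inputs of the same shape as the question.
library(parallel)

set.seed(42)
n  <- 1.5e6
vp <- runif(n)            # probabilities (all unique, as in the question)
va <- runif(n, 0.5, 5)    # assumed alpha values
vb <- runif(n, 0.5, 5)    # assumed beta values

ncores <- max(1L, detectCores() - 1L)
cl <- makeCluster(ncores)

## Split the index range into one contiguous chunk per worker,
## then let each worker run vectorized qbeta() on its chunk.
idx <- splitIndices(n, length(cl))
parts <- clusterMap(cl,
                    function(p, a, b) qbeta(p, a, b),
                    lapply(idx, function(i) vp[i]),
                    lapply(idx, function(i) va[i]),
                    lapply(idx, function(i) vb[i]))
stopCluster(cl)

## Reassemble the chunks in order into the answer vector.
vanswer <- unlist(parts, use.names = FALSE)
```

Because `splitIndices` returns the chunks in order, `unlist` reassembles `vanswer` so it matches the serial `qbeta(vp, va, vb)` element for element; the main costs to watch are cluster startup and shipping the chunks to the workers.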

0 Answers