
I have a problem with computation time when running a latent class mixed model in R (package lcmm). Since R only uses one core by default, I would like to use more cores to reduce the computation time.

I noticed that it is possible to do this with a "foreach" loop or an "apply" approach, with the help of the "parallel" and "multicore" packages. However, it is unclear to me how this applies to fitting models such as lm, glm, lme4, and the one I use: lcmm.
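
For example, this is the kind of parallel loop I have in mind for fitting several independent models at once (a minimal sketch with a doParallel backend; `mydata`, `y`, `x1` and `x2` are just placeholder names, not my real data):

    library(parallel)
    library(doParallel)
    library(foreach)

    # Start a cluster on all but one core and register it for %dopar%
    cl <- makeCluster(detectCores() - 1)
    registerDoParallel(cl)

    # Fit several independent glm models, one per worker
    formulas <- list(y ~ x1, y ~ x2, y ~ x1 + x2)
    fits <- foreach(f = formulas, .packages = "stats") %dopar% {
      glm(f, data = mydata, family = binomial)
    }

    stopCluster(cl)

This only helps when there are several independent fits to run; it does not make one single model fit faster.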

I have already seen on this website that the speedglm function saves a lot of time for glm models. Still, to my knowledge, no such function exists for lcmm. Here is the code I use for my model:

    lcmm(H ~ time, random = ~time, subject = "ID", mixture = ~time, ng = 3,
         maxiter = 400, convB = 1e-2, convL = 1e-2, convG = 1e-2,
         nwg = TRUE, data = Base_conv, link = "linear")
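
The only parallel workaround I can think of so far is to fit several candidate models at once (for example different numbers of classes), since a single lcmm() call still runs on one core. A minimal sketch with parallel::mclapply (forking, so mc.cores > 1 will not work on Windows; there one would switch to parLapply or foreach):

    library(parallel)
    library(lcmm)

    # Fit one model per candidate number of latent classes, each on its own core.
    # (mixture= and nwg= only make sense for ng > 1, hence ng = 2:4 here.)
    fits <- mclapply(2:4, function(g) {
      lcmm(H ~ time, random = ~time, subject = "ID", mixture = ~time, ng = g,
           maxiter = 400, convB = 1e-2, convL = 1e-2, convG = 1e-2,
           nwg = TRUE, data = Base_conv, link = "linear")
    }, mc.cores = 3)

    summary(fits[[1]])  # inspect e.g. the 2-class model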

Is there a way to make the computation use more of the computer's cores? Or anything else I can do to optimize computation time?

Thanks in advance.

Marc

MrRonsard
  • I'm not aware of how *lcmm* works but the authors of the package [in their companion paper in Journal of Statistical Software](https://arxiv.org/pdf/1503.00890.pdf) (page 52) mention that they intend to include parallel computations in a future version. You could probably follow the [NEWS](https://cran.r-project.org/web/packages/lcmm/NEWS) of the package. – lampros Oct 16 '17 at 19:41
  • Yes, thank you lampros for your answer. So I have to wait for a specific improvement of lcmm. In the meantime I should focus on the R side itself. – MrRonsard Oct 17 '17 at 08:01
