
I have noticed that when I parallelize code in R 3.5.1, functions take up a lot more RAM than when running on a single processor. Is there a way to release that RAM?

Subquestion: increasing the memory limit above my physical RAM in 64-bit R seems to have no effect while parallelizing. Is there a way to set it higher, if that is possible at all?

library(doParallel)
cl <- makeCluster(detectCores(), type = 'PSOCK')
registerDoParallel(cl)
somefunction(x)
gc(reset = TRUE) ## does nothing
function2(y)     ## runs out of RAM, because R holds onto objects from somefunction()
stopCluster(cl)
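
I would expect something like the following to release the memory, though I am not sure it actually does in this situation. A minimal sketch, using clusterEvalQ() and stopCluster() from the parallel package and registerDoSEQ() from foreach; somefunction(), function2(), x and y are the placeholders from above:

library(doParallel)

cl <- makeCluster(detectCores(), type = 'PSOCK')
registerDoParallel(cl)

somefunction(x)                               ## placeholder parallel step

clusterEvalQ(cl, { rm(list = ls()); gc() })   ## drop objects held by each worker
rm(x)                                         ## drop large objects in the master session
gc(reset = TRUE)

stopCluster(cl)                               ## shut down the worker processes
registerDoSEQ()                               ## point foreach back at sequential execution

function2(y)                                  ## placeholder follow-up step
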
JacobJacox
  • From what I understand the release works fine, but one then runs into other problems such as memory fragmentation, so the freed memory cannot be reused. I found it more efficient to do the parallelisation outside of R. I wrote an example using make [here](https://codereview.stackexchange.com/questions/115437/using-doparallel-to-cycle-through-rds-files/115828#115828) if you are interested. – bdecaf Sep 19 '18 at 06:55
  • I like it and I will use it for sure, thank you! @bdecaf – JacobJacox Sep 19 '18 at 07:15
  • @bdecaf I keep getting the error: make: Nothing to be done for 'all'. I used a tab instead of white space. Do you know what I am doing wrong? – JacobJacox Sep 19 '18 at 15:51
  • It seems it is complicated if it is not tabbed, but possible: https://stackoverflow.com/questions/2131213/can-you-make-valid-makefiles-without-tab-characters. However, you can use any parallel tool in the shell to get this speedup; make was just an example that works for me, and maybe another tool is better for your case. Have a look at https://www.codeword.xyz/2015/09/02/three-ways-to-script-processes-in-parallel/ – bdecaf Sep 19 '18 at 17:39
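
To make bdecaf's suggestion concrete: the idea is that each chunk of work runs in its own short-lived R process, so the operating system reclaims all of that process's memory when it exits. A minimal sketch of such a worker script, assuming the inputs are RDS files as in the linked example; process_one() and the file paths are hypothetical placeholders:

## worker.R - processes a single input file and exits
args    <- commandArgs(trailingOnly = TRUE)
infile  <- args[1]    ## e.g. data/input_01.rds (placeholder path)
outfile <- args[2]    ## e.g. results/output_01.rds (placeholder path)

dat <- readRDS(infile)        ## each R process loads only its own chunk
res <- process_one(dat)       ## placeholder for the real computation
saveRDS(res, outfile)         ## result written to disk; memory is freed when the process exits

A make rule or a shell loop can then launch many of these in parallel, e.g. Rscript worker.R data/input_01.rds results/output_01.rds, one call per input file.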

0 Answers