
I am trying to convert a for loop that I currently use to process a large matrix. The loop finds the maximum value within each 30 x 30 block and stores it in a new, smaller matrix.

The current code for the for loop looks like this:

mat <- as.matrix(CHM) # CHM is the original raster image
maxm <- matrix(nrow=nrow(mat)/30, ncol=ncol(mat)/30) # create new matrix with new dimensions

for(i in 1:dim(maxm)[1]) {
  for(j in 1:dim(maxm)[2]) {
    row <- 30 * (i - 1) + 1
    col <- 30 * (j - 1) + 1
    maxm[i,j] <- max(CHM[row:(row + 29), col:(col + 29)])
  }
}

I want to convert this to a foreach loop to use parallel processing. I've got as far as producing the following code, but this doesn't work. I'm not sure how to produce the new matrix within the foreach loop:

ro<-nrow(mat)/30
co<-ncol(mat)/30
maxm <- matrix(nrow=nrow(mat)/30, ncol=ncol(mat)/30)

foreach(i=ro, .combine='cbind') %:%
  foreach(j=co, .combine='c') %dopar% {
    row <- 30 * (i - 1) + 1
    col <- 30 * (j - 1) + 1
    maxm[i,j]<-(max(CHM[row:(row + 29), col:(col + 29)]))
  }

Any suggestions please!

  • In maxm[i,j]<-(max(CHM[row:(row + 29), col:(col + 29)])), try changing <- to <<-. – Dmitriy May 13 '19 at 10:29
  • Error in { : task 1 failed - "object 'maxm' not found". I don't think it likes assigning values to the matrix within the loop – chrischandler May 13 '19 at 10:37
  • Ah, yes, that won't work. The main problem with your code is how the calculation result is stored. I often use parLapply/parSapply instead of foreach-dopar, but you could probably write the code like this: maxm = foreach(i=ro, .combine='cbind') %:% { result = foreach(j = co, .combine='c') %dopar% { row <- 30 * (i - 1) + 1; col <- 30 * (j - 1) + 1; (max(CHM[row:(row + 29), col:(col + 29)])) } result; } Also, you probably want row binding for your matrix, so change 'cbind' to 'rbind' in your first 'foreach'. – Dmitriy May 13 '19 at 11:10
  • Have a look at https://privefl.github.io/blog/a-guide-to-parallelism-in-r/#filling-something-in-parallel. – F. Privé May 13 '19 at 11:36
  • http://blog.aicry.com/r-parallel-computing-in-5-minutes/index.html too. – Dmitriy May 13 '19 at 11:45

2 Answers


Before performing any action in parallel, one should first check whether vectorization is possible, and once that is done ask: 'is parallelization reasonable?'
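
For instance, the 30 x 30 block maximum here can be computed without any explicit loop at all. A rough, unbenchmarked sketch (using a random stand-in for as.matrix(CHM)):

# Loop-free block maximum via a grouped tapply.
# If CHM is a raster object, raster::aggregate(CHM, fact = 30, fun = max)
# does the same thing directly.
mat   <- matrix(runif(3000^2), ncol = 3000)  # stand-in for as.matrix(CHM)
block <- 30
ri <- (row(mat) - 1) %/% block + 1           # block-row index of every cell
ci <- (col(mat) - 1) %/% block + 1           # block-column index of every cell
maxm <- tapply(mat, list(ri, ci), max)       # 100 x 100 matrix of block maxima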

In this specific example, parallelization is unlikely to be as fast as you expect, because at each iteration you save your output into a common object. R does not generally support this kind of shared-state update during parallel execution; instead one should reserve parallelization for so-called 'embarrassingly parallel' problems, at least until one has a better understanding of how parallel workers behave. In short: don't perform parallel in-place changes to data in R unless you know what you are doing. It is unlikely to be faster.

That said, in your case it actually becomes quite tricky. You are effectively performing a block-wise max over 30 x 30 windows, and the output should be saved in a combined matrix. An alternative to writing the data directly into that matrix is to return a matrix with three columns x, i, j, where the latter two are the indices indicating which row/column the value x should be placed in.

In order for this to work, as Dmitriy noted in his answer, the data needs to be exported to each cluster node (parallel session) so that it can be used there. The following example then shows how the parallelization can be performed.

First: Create a cluster and export the dataset

set.seed(1)
#Generate test example
n <- 3000
dat <- matrix(runif(n^2), ncol = n)
library(foreach)
library(doParallel)
#Create cluster
cl <- parallel::makeCluster(parallel::detectCores())
#Register it for the foreach loop
doParallel::registerDoParallel(cl)
#Export the dataset (could be done directly in the foreach, but this is more explicit)
parallel::clusterExport(cl, "dat")

Next we come to the foreach loop. Note that, according to the documentation, nested foreach loops should be separated using the %:% operator, as shown in the example below:

output <- foreach(i = 1:(nrow(dat)/30), .combine = rbind, .inorder = FALSE) %:% 
    foreach(j = 1:(ncol(dat)/30), .combine = rbind, .inorder = FALSE) %dopar%{
        row <- 30 * (i - 1) + 1
        col <- 30 * (j - 1) + 1
        c(x = max(dat[row:(row + 29), col:(col + 29)]), i = i, j = j)
    }

Note the .inorder = FALSE. As I return the indices, I don't care about order, only about speed. Last but not least, we need to create the matrix. The Matrix package function Matrix::sparseMatrix allows the values and their indices to be specified directly.

output <- Matrix::sparseMatrix(output[,"i"], output[,"j"], x = output[,"x"])

This is still rather slow. For n = 3000 it took roughly 6 seconds to perform the calculations, plus a not-insignificant overhead from exporting the data, but it is likely faster than the same method using sequential loops.
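
Two small housekeeping steps worth adding: release the workers once the parallel section is done, and convert the sparse result back to an ordinary dense matrix if that is what the rest of your pipeline expects.

# Stop the cluster once the parallel work is finished
parallel::stopCluster(cl)

# Convert the sparse result to a plain dense matrix if needed
maxm <- as.matrix(output)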

Oliver
  • If I'm not mistaken, foreach "captures the context" of the method it's called from (copying the current environment to the cluster's nodes). So you really do not need clusterExport for dat in your code. – Dmitriy May 13 '19 at 12:45
  • If you’ve done any parallel computing, particularly on a cluster, you may wonder why I didn’t have to do anything special to handle x and y. The reason is that the %dopar% function noticed that those variables were referenced, and that they were defined in the current environment. In that case %dopar% will automatically export them to the parallel execution workers once, and use them for all of the expression evaluations for that foreach execution... – Dmitriy May 13 '19 at 12:48
  • ...That is true for functions that are defined in the current environment as well, but in this case, the function is defined in a package, so we had to specify the package to load with the packages option instead. (https://cran.r-project.org/web/packages/foreach/vignettes/foreach.pdf) – Dmitriy May 13 '19 at 12:49
  • I am well aware of `foreach`'s automatic detection of necessary variables. Yet there are cases where this might not work out as intended, in which case the `.export` argument or `clusterExport` ensures they are properly handled. This is also the case with the `futures` package. I have yet to see them fail, but it never hurts to further ensure that the necessary variables are exported. – Oliver May 13 '19 at 12:52
  • It only works for the current environment, so you still need to pass any other variables/functions/data that are not in your current context via clusterExport. On the other hand, your code also won't work properly if it is used inside a function body with passed input variables - then you have to pass the data through a temporary environment, created manually in the function body, just for passing data to the cluster's nodes... – Dmitriy May 13 '19 at 12:57
  • I'll note your answer is perfectly fine as well; either would do. The only thing I'd suggest changing is the nested structure, which the `foreach` documentation covers (`foreach(...) %:% foreach(...) %dopar% {...}`). All together, either answer would likely have done. – Oliver May 13 '19 at 12:58
  • I've just been reminded of the "headache" I got some years ago with my first steps in parallel computation in R. Your code is pretty good! ))) – Dmitriy May 13 '19 at 13:02
  • I tried using the matrix function, however I received an error on the length of 'i'. Since the order is important: output<-foreach(i=1:(nrow(maxm)), .combine='rbind') %:% foreach(j=1:(ncol(maxm)), .combine='cbind') %dopar% { row <- 30 * (i - 1) + 1; col <- 30 * (j - 1) + 1; max(cmat[row:(row + 29), col:(col + 29)]) } seems to work – chrischandler May 13 '19 at 13:39
  • I think both solutions are correct; however, it seems important to use .combine='rbind' followed by cbind in order to get the correct matrix. – chrischandler May 13 '19 at 13:41
  • As long as it works. The solution suggested here simply avoids the need for ordering, as it is given by the `i` and `j` columns of the output, and copy-pasting the code works on my machine in a fresh R session. Without error messages I'm unaware of why it fails. :-) – Oliver May 13 '19 at 14:01

Let me try to give an answer here.

As far as I know, R uses a cluster system for parallel computation, and each node works in its own environment. So foreach-%dopar% first copies the current .GlobalEnv to each cluster node and then runs the code written in the loop body, with no copy back after execution. You only get a result via result = foreach(...) { }. So the code maxm[i,j]<-(max(CHM[row:(row + 29), col:(col + 29)])) only changes the local copy of your matrix on each node, nothing more. The "correct" code will therefore probably look like this:

mat <- as.matrix(CHM);
ro<-nrow(mat)/30;
co<-ncol(mat)/30;

maxm = foreach(i=1:ro, .combine='cbind') %:% 
{ 
   result = foreach(j = 1:co, .combine='c') %dopar% 
            { 
                row <- 30 * (i - 1) + 1; 
                col <- 30 * (j - 1) + 1; 
                max(CHM[row:(row + 29), col:(col + 29)]); 
            } 
   result; 
} 

You may also need to call as.matrix on maxm.
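
To make the "no copy back" behaviour concrete, here is a small self-contained sketch (toy 2 x 2 data, two workers): assignments made inside %dopar% only touch the worker's private copy, and only the returned values come back.

library(foreach)
library(doParallel)

cl <- parallel::makeCluster(2)
doParallel::registerDoParallel(cl)

maxm <- matrix(0, nrow = 2, ncol = 2)
res <- foreach(i = 1:2, .combine = 'c') %dopar% {
  maxm[i, i] <- 99   # changes only this worker's copy of maxm
  maxm[i, i]         # the returned value is what foreach collects
}
parallel::stopCluster(cl)

maxm   # still all zeros on the master process
res    # 99 99: only the returned results came back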

Dmitriy