
I am trying to run R code with multisession parallelism such that all the error messages redirect to the same file. However, the message sink cannot be established on the workers.

library(parallel)
cl <- makePSOCKcluster(2)
f <- function(){
  # Try to divert messages/stderr to a single shared file on each worker.
  withr::with_message_sink("messages.txt", Sys.sleep(10))
}
clusterCall(cl = cl, fun = f)

## Error in checkForRemoteErrors(lapply(cl, recvResult)) :
##   2 nodes produced errors; first error: Cannot establish message sink when another sink is active.
## Calls: clusterCall -> checkForRemoteErrors
## Execution halted
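
I suspect withr refuses because PSOCK workers already divert their output and messages to the cluster's `outfile` (which defaults to `/dev/null`), so a message sink is already active before `with_message_sink()` tries to add one. A quick sketch to check that assumption on the workers:

library(parallel)
cl <- makePSOCKcluster(2)
# sink.number(type = "message") returns 2 when messages still go to stderr;
# any other value means a message sink is already active on that worker.
clusterCall(cl, function() sink.number(type = "message"))
stopCluster(cl)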

Edit

Given some of the responses, I should elaborate on the purpose of this post. I am developing drake, an R package with multiple parallel backends. Today, I implemented a new hook argument to make(), which just wraps individual parallel jobs in a function of the user's choice. What I am really looking for is a hook that silences the console regardless of parallel backend. Backends in the current development version include

  • parallel::mclapply()
  • parallel::parLapply()
  • base::lapply() (via parLapply() for one job)
  • make -j with a proper Makefile
  • future::sequential
  • future::multicore
  • future::multisession
  • future.batchtools backends listed here

I thought I found a hook that worked for stderr.

hook <- function(){
  withr::with_message_sink("messages.txt", Sys.sleep(10))
}

However, withr::with_message_sink() does not let me sink multiple workers to the same file for the parLapply() or future::multisession backends.
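
As a stopgap, the suppression idea suggested in the comments below does silence `parLapply()` workers (though, as noted there, not the Makefile backend). Here is a sketch applied to the reprex above, with Carl Boneri's `shh()` exported to the workers:

library(parallel)
# Suppress messages and warnings in place instead of opening a second sink.
shh <- function(...){
  invisible(suppressWarnings(suppressPackageStartupMessages(suppressMessages(...))))
}
cl <- makePSOCKcluster(2)
clusterExport(cl, "shh")  # make shh() available on the workers
f <- function(){
  shh(message("this message is silenced on the worker"))
}
clusterCall(cl = cl, fun = f)
stopCluster(cl)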

  • I think I have had a similar need/problem/solution, but I need a bit more info. By 'multi-session' are we talking across multiple active sessions on the same server, or simply multiple PIDs? What's the main goal? In my case I needed to run multiple large jobs in parallel, and if any iteration throws an error I have a tryCatch log the error, but I don't stall anything out purposefully... just skip over. – Carl Boneri Oct 26 '17 at 20:16
  • Good call. Please see my edit. My own needs are unusual. In your case, [development drake](https://github.com/wlandau-lilly/drake) might help. I just implemented a new [`diagnose()` function](https://github.com/wlandau-lilly/drake/issues/114) that retrieves verbose error information on failed targets, including the error message and call stack. – landau Oct 27 '17 at 03:30
  • Together, functions `drake::failed()` and `drake::diagnose()` remove the strict need for ordinary error messages printed to the console. I am planning a CRAN update in November with these new features. – landau Oct 27 '17 at 03:37
  • my gnarly brute-force suppression method lol: `shh <- function(...){ invisible( suppressWarnings( suppressPackageStartupMessages( suppressMessages( ... ) ) ) ) }` – Carl Boneri Oct 27 '17 at 04:54
  • Good idea. Works for `parLapply`, but `drake::make(drake::workflow(x = stop()), parallelism = "Makefile", jobs = 2, hook = shh, verbose = FALSE)` is unfortunately not silent. – landau Oct 27 '17 at 11:27
  • I checked out the repo... Not sure if it's useful, but I put a script I use in the issues docket for your use if you so choose. Good work! – Carl Boneri Oct 27 '17 at 11:59

1 Answer


Can you just use sink()?

library(parallel)
cl <- makePSOCKcluster(2)
# Store a unique ID in each worker's global environment.
clusterApply(cl, seq_along(cl), function(i) workerID <<- i)

f <- function(){
  # Each worker sinks to its own file, e.g. "1_messages.txt", "2_messages.txt".
  outtxt <- paste(workerID, "messages.txt", sep = "_")
  print(outtxt)
  sink(outtxt)
  Sys.sleep(10)
  sink()
}
clusterCall(cl = cl, fun = f)

stopCluster(cl)
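
A single-shared-file variant that comes up in the comments below is the `outfile` argument of `makePSOCKcluster()`, which appends every worker's stdout and stderr to one file (the first comment links to a discussion of its drawbacks). A sketch:

library(parallel)
# All workers append their output and messages to the same file.
cl <- makePSOCKcluster(2, outfile = "messages.txt")
clusterCall(cl, function() message("this lands in messages.txt"))
stopCluster(cl)
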
  • I did use `outfile` at one point, but I was [persuaded otherwise](https://github.com/HenrikBengtsson/future/issues/171#issuecomment-339442682). Please see my edit. – landau Oct 27 '17 at 03:31
  • could you just use sink? – James Thomas Durant Oct 27 '17 at 15:09
  • For whatever reason, that did not work when I tried it before. But since you brought it up, I tried `make(..., hook = message_sink_hook)` with the [silencer hook here](https://github.com/wlandau-lilly/drake/blob/master/R/hooks.R#L32), and it seems to work everywhere. – landau Oct 27 '17 at 17:47