In the context of teaching R programming, I am trying to run R scripts completely independently of each other, so that I can compare the objects they have generated.
Currently, I do this with R environments:
student_env <- new.env()
solution_env <- new.env()
eval(parse(text = "x <- 4"), envir = student_env)
eval(parse(text = "x <- 5"), envir = solution_env)
student_env$x == solution_env$x
While this provides some encapsulation, it is far from complete. For example, if I execute a library() call in the student environment, the package is attached to the global R session's search path, making it available to code running in the solution environment as well.
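To illustrate the leak (using MASS only as an arbitrary example package):

eval(parse(text = "library(MASS)"), envir = student_env)

# the package is attached to the session-wide search path,
# so it is visible to code evaluated in solution_env too
"package:MASS" %in% search()
#> [1] TRUE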
To ensure complete separation, I could fire up subprocesses using the subprocess package:
library(subprocess)

rbin <- file.path(R.home("bin"), "R")

# one fresh R session per party
student_handle <- spawn_process(rbin, c('--no-save'))
solution_handle <- spawn_process(rbin, c('--no-save'))

# feed code to each child's stdin
process_write(student_handle, "x <- 4\n")
process_write(solution_handle, "x <- 5\n")
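I can read back whatever the children print. If I understand the subprocess API correctly, process_read() returns the output the child wrote to the requested pipe (the sleep and the timeout value are guesses on my part):

Sys.sleep(0.5)  # crude: give the child a moment to evaluate the input
process_read(student_handle, PIPE_STDOUT, timeout = 1000)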
However, I'm not sure how to go about the step of fetching the R objects so I can compare them.
My questions:

- Is subprocess a good approach?
- If yes, how can I (efficiently!) grab the R representations of objects from a subprocess so that I can compare the objects in the parent process? Python does this through pickling/dilling. (My best guess so far is sketched after this list.)
  - I could communicate through .rds files, but that means unnecessary file creation and reading.
  - In R, I came across RProtoBuf, but I'm not sure whether it solves my problem.
- If no, are there other approaches I should consider? I've looked into opencpu, but firing up a local server and then using R to talk to that server to fetch representations feels like too complex an approach.
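For concreteness, the closest analogue to pickling I could come up with is base R's serialize()/unserialize(), shuttled through the child's stdout as base64 text. The sketch below is untested glue, not a working solution: it assumes the base64enc package is available to both the parent and the child, and the sentinel-plus-regex handling is only there because the child echoes its input back on stdout.

library(base64enc)

# ask the child to print x as base64-encoded serialized bytes,
# bracketed by sentinels so the payload can be found among echoed input
process_write(student_handle,
  'cat("BEGIN64", base64enc::base64encode(serialize(x, NULL)), "END64\\n")\n')

Sys.sleep(0.5)  # crude: wait for the child to respond
out <- paste(unlist(process_read(student_handle, PIPE_STDOUT, timeout = 1000)),
             collapse = " ")

# extract the payload and rebuild the object in the parent process
b64 <- sub(".*BEGIN64\\s+(\\S+)\\s+END64.*", "\\1", out)
student_x <- unserialize(base64decode(b64))
student_x
#> [1] 4

Repeating the round trip for solution_handle would then let me run identical(student_x, solution_x) in the parent, but I have no idea whether this is sane or efficient, hence the question.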
Thanks!