We are using officer to generate reports (about 100 pages with many graphs and tables) automatically. If we run the chapters separately, each file runs very fast. But when running all 12 files together, this takes up to an hour. We assume that all results are kept in working memory, which causes the problem. Printing the document after each chapter, or using rm()
to remove objects that are no longer needed in between, has no effect on the processing time.
Any ideas what can be done to "clear" the working memory or speed up the process?
Here's an abstract of our code:
doc_output = file.path("C:/Doc/report.docx")
doc = read_docx(path = doc_template)
source(paste0("Chapter-1_", year, ".R"))
print(doc, target = doc_output)
rm(list = ls()[! ls() %in% c("year", "data_all", "data", "doc_template", "doc_output", "doc")])
gc()
source(paste0("Chapter-2_", year, ".R"))
print(doc, target = doc_output)
rm(list = ls()[! ls() %in% c("year", "data_all", "data", "doc_template", "doc_output", "doc")])
gc()
[...]
source(paste0("Chapter-11_", year, ".R"))
print(doc, target = doc_output)
rm(list = ls()[! ls() %in% c("year", "data_all", "data", "doc_template", "doc_output", "doc")])
gc()
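For comparison, here is a minimal sketch of how the same pipeline could be driven from a loop, with a single print() at the very end instead of one per chapter (print() serialises the whole accumulated document each time it is called). The year value, template path, and the assumption of exactly 12 chapter scripts are placeholders, and the officer/file-existence guard is only there so the sketch runs standalone:

```r
## Hypothetical refactor: source each chapter in a loop, clean up between
## chapters, and write the .docx only once at the end.

year <- 2023                                          # assumed value
chapter_files <- sprintf("Chapter-%d_%s.R", 1:12, year)

# Objects that must survive the per-chapter cleanup
keep <- c("year", "data_all", "data", "doc_template",
          "doc_output", "doc", "keep", "chapter_files", "f")

if (requireNamespace("officer", quietly = TRUE) && all(file.exists(chapter_files))) {
  doc_template <- "template.docx"                     # assumed path
  doc_output   <- file.path("C:/Doc/report.docx")
  doc <- officer::read_docx(path = doc_template)

  for (f in chapter_files) {
    source(f)                                         # chapter script appends to `doc`
    rm(list = setdiff(ls(), keep))                    # drop per-chapter objects
    gc()
  }

  print(doc, target = doc_output)                     # single write at the end
}
```

The setdiff() call is equivalent to the `ls()[! ls() %in% ...]` idiom above, just easier to read.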