We have an OpenCPU Docker container running to process our models in R. It never needs to shut down, and we have put every dependency we could think of in the preload section of the opencpu-server config.
Usually the models take less than 2 seconds to complete, which is fine. However, after some inactivity on the OpenCPU container, the first request to OpenCPU takes more than 90 seconds to complete (looking at the config file, that might be the timelimit.post limit). But I know it can run within 2 seconds, so what's going on?
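For reference, a sketch of what the relevant part of our /etc/opencpu/server.conf looks like; the package names and values here are illustrative, not our exact configuration (JSON allows no comments, so note that "ourmodelpackage" is a placeholder):

```json
{
  "preload": ["jsonlite", "ourmodelpackage"],
  "timelimit.get": 60,
  "timelimit.post": 90
}
```

As far as we understand it, the preload array lists packages the server loads up front so they don't have to be loaded per request.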
The Apache logs show no errors, and the OpenCPU logs show nothing out of the ordinary either. I've adjusted Apache to record request times as well, and it confirms that it actually took those 90 seconds to respond, which rules out anything Docker-related.
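For completeness, the request-time logging uses Apache's %D directive, which records the time taken to serve the request in microseconds; the exact LogFormat below is an example along the lines of what we used, not our literal config:

```apache
# Log request duration in microseconds via %D
LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
CustomLog ${APACHE_LOG_DIR}/access.log timed
```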
Is OpenCPU (or Apache) being shut down after some period of inactivity? If so, why does it take so long to get running again, and how can we disable this?
Any ideas on how to solve/debug this?
In answer to Jeroen: the problem first appeared on a package that we ran with a data folder of around 8 MB instead of 20 KB. But within R, the code always runs within 0.006 seconds. Another reason we don't think the R code is the problem is that the exact same code on the exact same input data runs in normal time in other cases; only the first try after a period of inactivity is this slow.
Another strange thing is that even though OpenCPU runs for 90 seconds, the result of the prediction still comes back, with no errors. When we measured the running time in R (with Sys.time()), the R code ran for around 5-6 seconds instead of 0.006 s. Even though that is very slow, it does not explain the remaining ~85 seconds.
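For clarity, the measurement was taken directly around the prediction call, roughly like this (fit and input_frame are placeholders for our actual model object and input data):

```r
# Timing sketch; 'fit' and 'input_frame' are placeholder names,
# not our real identifiers.
t0 <- Sys.time()
result <- predict(fit, newdata = input_frame, type = "response")
print(Sys.time() - t0)  # ~0.006s normally, ~5-6s on the first call after inactivity
```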
Also, restarting OpenCPU/Apache within the container itself seems to resolve the issue temporarily, but it comes back after a period of inactivity.
Furthermore, to answer MrSmith's question: the code does not go through a database. It is just a script that runs predictions from a logistic regression model on an input data frame.
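In outline, the script is no more complicated than the following sketch; the file and function names are placeholders:

```r
# Load a previously fitted logistic regression once, then score
# incoming data frames with it. No database access anywhere.
fit <- readRDS("model.rds")

score <- function(input_frame) {
  predict(fit, newdata = input_frame, type = "response")
}
```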