I'm using doopl.factory to solve multiple linear programs in a loop. I noticed that performance degrades as I loop through the instances. Profiling with memory_profiler shows that memory usage grows after each call, which eventually leads to very poor performance. It seems that doopl.factory.create_opl_model() and opl.run() hold on to memory that is not released by opl.end(). Is my analysis correct?
[screenshot: memory_profiler line-by-line output]
I set up a simple example to demonstrate the issue.
import doopl.factory, os, psutil
from memory_profiler import profile

@profile
def main():
    dat = 'data.dat'
    mod = 'model.mod'
    # rss is reported in bytes; divide by 1e9 to print GB
    print('memory before doopl: ' + str(psutil.Process(os.getpid()).memory_info().rss / 1e9) + ' GB')
    with doopl.factory.create_opl_model(model=mod, data=dat) as opl:
        try:
            opl.mute()
            opl.run()
            opl.end()  # EDIT: only here to show explicitly with memory_profiler that opl.end() does not free all memory
        except Exception:
            print('error')
    print('memory after doopl: ' + str(psutil.Process(os.getpid()).memory_info().rss / 1e9) + ' GB')

if __name__ == "__main__":
    main()
The data.dat file is empty and the model.mod file is as follows:
range X = 1..5;
dvar int+ y[X];

minimize sum(x in X) y[x];
subject to {
    forall (x in X) {
        y[x] <= 2;
    };
};
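To reproduce the slowdown I describe above, I simply drive the script in a loop (a minimal driver, the iteration count is arbitrary); the memory printed before each solve then increases from call to call:

if __name__ == "__main__":
    for i in range(50):
        main()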
Is there some way to fully clear memory after solving a model with doopl?
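If the memory cannot be released in-process, the only workaround I can think of is to run each solve in a short-lived worker process so that the OS reclaims everything when the worker exits. A rough sketch of that idea (the instances list and the use of multiprocessing.Pool are just illustrative, not part of my actual setup):

import multiprocessing as mp
import doopl.factory

def solve_once(mod, dat):
    # Runs in a child process; whatever memory the OPL engine holds
    # is returned to the OS when the worker exits.
    with doopl.factory.create_opl_model(model=mod, data=dat) as opl:
        opl.mute()
        return opl.run()  # extract any needed results here as plain Python objects

if __name__ == "__main__":
    instances = [('model.mod', 'data.dat')]  # placeholder list of (model, data) pairs
    # maxtasksperchild=1 gives every solve a fresh worker process
    with mp.Pool(processes=1, maxtasksperchild=1) as pool:
        for mod, dat in instances:
            ok = pool.apply(solve_once, (mod, dat))
            print('solved:', ok)

I would much rather release the memory within the same process, though, since spawning a process per instance adds overhead.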