
I'm using doopl.factory to solve multiple linear programs in a loop. I noticed performance degrading as I loop through instances. memory_profiler shows that memory usage increases after each call, which eventually leads to very poor performance. It seems that doopl.factory.create_opl_model() and opl.run() hold on to memory that is not released by opl.end(). Is my analysis correct?

[memory_profiler analysis screenshot]

I set up a simple example to demonstrate the issue.

import doopl.factory, os, psutil
from memory_profiler import profile


@profile
def main():

    dat = 'data.dat'
    mod = 'model.mod'

    print('memory before doopl: ' + str(psutil.Process(os.getpid()).memory_info().rss / 1e9) + ' GB')

    with doopl.factory.create_opl_model(model=mod, data=dat) as opl:
        try:
            opl.mute()
            opl.run()
            opl.end()  # EDIT: included only to demonstrate explicitly with memory_profiler that opl.end() does not free all memory.
        except Exception:
            print('error')

    print('memory after doopl: ' + str(psutil.Process(os.getpid()).memory_info().rss / 1e9) + ' GB')

if __name__ == "__main__":
    main()

The data.dat file is empty and the model.mod file is as follows:

range X = 1..5;

dvar int+ y[X];

minimize sum(x in X) y[x];

subject to {
    forall (x in X) {
        y[x] <= 2;
    };
};

Is there some way to fully clear memory after solving a model with doopl?

  • As I have not found a solution so far, I used Python's [docplex](http://ibmdecisionoptimization.github.io/docplex-doc/) module instead. So far, I have not encountered any memory issues with this module. However, it required me to translate my optimization model from OPL to docplex's syntax. – orarne Mar 11 '21 at 16:20

1 Answer


How can your code even work?

opl.end() sets some internals to None. If you do:

with create_opl_model(model=mod) as opl:
    opl.run()
    opl.end()

opl.end() is actually called twice: once explicitly, and then once more when the context manager exits, resulting in an exception.

Please do not call opl.end() if you are using it as a context manager.
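
For reference, a minimal sketch of the intended usage, reusing the model and data file names from the question and leaving end() to the context manager:

import doopl.factory

with doopl.factory.create_opl_model(model='model.mod', data='data.dat') as opl:
    opl.mute()
    opl.run()
    # no opl.end() here: it is invoked automatically when the with-block exits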

This is unless you have a very old version of doopl (more than two years old). If so, please upgrade.

Now, in opl.end(), I can tell you that the C++ objects are correctly freed. I'm not aware of any memory leak issues here (but a memory leak in OPL would have to be demonstrated using C++, not a garbage-collected language).

As far as I know, memory_profiler is based on the process size reported by psutil. There is no guarantee that the process size decreases when you release some memory (Python may have released the memory, but the memory allocator might not have returned it to the operating system).
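
As a small, doopl-independent illustration of that caveat, the sketch below measures RSS with psutil before and after releasing a large number of small Python objects; depending on the allocator, the last figure may or may not drop back to the first:

import os
import psutil

def rss_gb():
    # Resident set size of the current process, in GB.
    return psutil.Process(os.getpid()).memory_info().rss / 1e9

print('before allocation: %.3f GB' % rss_gb())
data = [str(i) for i in range(5_000_000)]  # allocate many small objects
print('after allocation:  %.3f GB' % rss_gb())
del data                                   # release the Python objects
print('after release:     %.3f GB' % rss_gb())  # may or may not decrease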

  • Thanks, @Viu-Long Kong, for the reply. You are right that opl.end() does not make sense here. I included it to demonstrate that doopl's default end() procedure does not fully clear the memory held by doopl calls. I guess Python is somehow exiting the context manager at the end() call, which might be why the program keeps running. Anyhow, it is good to know that C++ frees memory correctly. So I think the issue is with Python's doopl module. I don't think it is an issue with memory_profiler, as my system noticeably slows down with more iterations (so the memory is not returned to the system). – orarne Mar 11 '21 at 16:12
  • I suspect that because Python allocates some memory and then C++ allocates some more, the allocations get interleaved when C++ releases its memory, which leads to memory fragmentation. Please consider calling opl in a ProcessPoolExecutor and serializing your solutions as simple Python objects so that you can return them from the executions (see the sketch after these comments). – Viu-Long Kong Mar 15 '21 at 08:15
  • Thanks! That sounds promising. May I ask you to provide a small example of how to apply ProcessPoolExecutor in this case? I have not used it so far. I tried to wrap 'with concurrent.futures.ProcessPoolExecutor() as executor:' around my main() function and use 'executor.submit(opl.run())' instead, but this does not seem to resolve the issue. – orarne Mar 16 '21 at 09:38
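
A rough sketch of what was probably meant (the helper name solve_instance and the instance list are made up for illustration): executor.submit must be given a callable, not the result of opl.run(), and the OPL model has to be created, run and disposed of entirely inside the worker process so that its memory is released when that process exits.

import concurrent.futures
import doopl.factory

def solve_instance(mod, dat):
    # Create, run and dispose of the OPL model entirely inside a worker
    # process, and return only plain Python objects.
    with doopl.factory.create_opl_model(model=mod, data=dat) as opl:
        opl.mute()
        return opl.run()  # extract any further results here before returning

def main():
    instances = [('model.mod', 'data.dat')] * 10  # hypothetical list of instances
    results = []
    for mod, dat in instances:
        # A fresh executor per instance guarantees a fresh worker process,
        # so memory held by doopl/OPL is returned to the OS when it exits.
        with concurrent.futures.ProcessPoolExecutor(max_workers=1) as executor:
            results.append(executor.submit(solve_instance, mod, dat).result())
    print(results)

if __name__ == '__main__':
    main()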