I have a Python script like this:
import module

a = ...  # a module-level (global) variable; its actual value is omitted here

def read(key):
    # funcA and funcB are helpers whose definitions are not shown
    x = funcA(a, key)
    ret = funcB(x)
    if ret:
        return "Success"
    else:
        return "Failure"
I am calling this function from my C++ program, repeatedly over time, via the Python-C-API. I noticed that my C++ process memory keeps increasing each time I call this function.
After a lot of investigation by:
- Running Valgrind
- Checking that every PyObject* in my C++ code gets a matching Py_DECREF
- Ensuring my code is not leaking memory elsewhere
- Replacing my raw Python-C-API calls with Boost.Python and checking again
I have found that the C++ process memory is increasing because of the Python script itself, not because of the Python-C-API. I confirmed this by replacing the above script with a dummy script (one that has no variables and doesn't do anything) and calling the same function repeatedly; this time the process memory stayed constant and did not increase.
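The dummy replacement was roughly of this shape (a minimal sketch, not my exact file):

def read(key):
    # no globals, no locals, no calls into the imported module
    return "Success"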
How do I force the Python garbage collector to free all the variables after each function call? I don't know if or when the Python GC will actually free the memory, and I have seen suggestions in other Stack Overflow answers that forcing the garbage collector to collect is a bad thing to do.
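Is something like the following what "forcing" the collector would mean? This is only a sketch of what I have in mind, reusing the read() function from above; I haven't verified that it helps:

import gc

def read(key):
    x = funcA(a, key)
    ret = funcB(x)
    result = "Success" if ret else "Failure"
    # Force a full collection after every call -- does this actually return
    # memory to the process, and is it considered bad practice?
    gc.collect()
    return result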
How do I ensure that no additional memory is used up by my C++ process each time I call the Python script? Is it as simple as using del on all the local Python variables I declare, or is there some other way?
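For example, would explicitly deleting the locals before returning (again, just a sketch) make any difference, given that they go out of scope when read() returns anyway?

def read(key):
    x = funcA(a, key)
    ret = funcB(x)
    result = "Success" if ret else "Failure"
    # Explicitly drop the local references before returning --
    # does this change anything for the process memory?
    del x
    del ret
    return result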
I am also calling quite a few functions from the imported module, and I don't know whether that affects garbage collection as well.