I have a small part in my code that's similar to this one (of course with real matrices instead of the zero-filled ones):
from rpy2 import rinterface, robjects

x = [rinterface.FloatSexpVector([0]*(1000**2)) for i in xrange(20)]
y = robjects.r('list')(x)
and it looks like it's causing memory leaks.
When running the following code:
for i in xrange(10):
    x = [rinterface.FloatSexpVector([0]*(1000**2)) for i in xrange(20)]
    y = robjects.r('list')(x)
    del x
    del y
    robjects.r('gc(verbose=TRUE)')
I get:
Error: cannot allocate vector of size 7.6 Mb
In addition: Warning messages:
1: Reached total allocation of 2047Mb: see help(memory.size)
2: Reached total allocation of 2047Mb: see help(memory.size)
3: Reached total allocation of 2047Mb: see help(memory.size)
4: Reached total allocation of 2047Mb: see help(memory.size)
Is this a bug, or is there something else I should do? I've also tried naming the variables by assigning them into robjects.globalenv and then rm()-ing them before the gc(), roughly as in the sketch below, but it doesn't seem to work.
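For reference, the globalenv attempt looked roughly like this (the R-side name x is arbitrary):

from rpy2 import rinterface, robjects

x = [rinterface.FloatSexpVector([0]*(1000**2)) for i in xrange(20)]
robjects.globalenv['x'] = robjects.r('list')(x)
del x
robjects.r('rm(x)')             # drop the binding on the R side
robjects.r('gc(verbose=TRUE)')  # then ask R to collect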
I should mention that I'm running rpy2 2.3dev on Windows, but this also happens on Linux with rpy2 2.2.6 (though since the Linux machine runs 64-bit versions rather than the 32-bit ones on the Windows machine, the memory just keeps growing and I don't get the 2047 Mb error).
EDIT: It seems like adding gc.collect() before the R gc() resolves the issue with the first code example; however, this didn't solve my problem. Digging deeper into my code, I found that the line causing the problem is the one assigning a value into .names, similar to this:
x = [rinterface.FloatSexpVector([0]*(1000**2)) for i in xrange(20)]
y = robjects.r('list')(x)[0]
y.names = rinterface.StrSexpVector(['a']*len(y))
Assigning rinterface.NULL into .names before cleaning up doesn't help either. Any suggestions?
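For completeness, the cleanup variant I'm currently testing looks roughly like this (still leaking on my machines; the inner loop variable j and the placement of the gc calls are just how I'm reproducing it):

import gc
from rpy2 import rinterface, robjects

for i in xrange(10):
    x = [rinterface.FloatSexpVector([0]*(1000**2)) for j in xrange(20)]
    y = robjects.r('list')(x)[0]
    y.names = rinterface.StrSexpVector(['a']*len(y))
    y.names = rinterface.NULL        # resetting the names first doesn't seem to help
    del x
    del y
    gc.collect()                     # Python-side collection
    robjects.r('gc(verbose=TRUE)')   # then R-side collection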