As the title says: we have a memory leak in a PyPy process, and the process goes down when it runs out of memory, but only on the production site. Our simplified environment is as follows:
- OS: CentOS 6
- pypy-2.3.1
< Tried Solutions >
objgraph seems to be the only profiling library we can use in this environment, and only with part of its functionality: it can print all objects currently in memory, but none of the further info such as references (`getrefcount` is not implemented). As a result we can only see lots of "int", "str", and "list" objects that seem to be leaking, without knowing who is holding them or what they are referencing. :(
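For context, the per-type object counting that still works for us can be reproduced with the stdlib alone; this is a rough sketch of what objgraph's `show_most_common_types` does internally (the function name and limit here are just illustrative):

```python
import gc
from collections import Counter

def show_most_common_types(limit=10):
    """Count live objects by type name, roughly what
    objgraph.show_most_common_types() prints. Only gives
    counts, not who references the objects."""
    counts = Counter(type(obj).__name__ for obj in gc.get_objects())
    for name, count in counts.most_common(limit):
        print("%-20s %d" % (name, count))
    return counts

counts = show_most_common_types()
```

This tells us *which* types are growing, but without reference information we cannot walk back to the owners.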
"pmap" produced data only shows memory growing in a [anon] block.
Periodically forcing gc did not help, so we concluded it is a REAL memory leak (the objects are still reachable, not just waiting to be collected).
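The periodic collection we tried was essentially the following; the interval and round count are placeholders, not our production values:

```python
import gc
import time

def periodic_collect(interval=60, rounds=3):
    """Force a full collection every `interval` seconds.
    gc.collect() returns the number of unreachable objects
    found; if memory still grows afterwards, the leaked
    objects are reachable, i.e. a genuine leak."""
    for _ in range(rounds):
        unreachable = gc.collect()
        print("collected %d unreachable objects" % unreachable)
        time.sleep(interval)

periodic_collect(interval=0, rounds=1)
```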
< Our constraints >
- it's hard to change the production Python runtime, since that might affect our users
- we cannot reproduce the issue in any other environment
Please advise if there are other tools/methodologies to attack this problem. Thanks a lot in advance :)