
I am continuously updating a Neo4j graph through the REST API with concurrent requests. I open and close each transaction explicitly, I'm using the recommended garbage collector (ConcurrentMarkSweep), and my memory maps are big enough to hold the entire graph in the cache, yet I'm still seeing "Old Gen" memory creep up well beyond the size of the graph itself, reaching 8GB at around 4 million nodes and 15 million relationships. Has anyone experienced a similar problem? Since I'm going through the REST API, it's hard to figure out where the memory is leaking.
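By "open and close each transaction explicitly" I mean something like the following minimal Java sketch against the transactional Cypher endpoint (the endpoint path, label, and statement are just illustrative, not my actual workload):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class Neo4jTxExample {
        public static void main(String[] args) throws Exception {
            // Transactional Cypher endpoint: opening and committing in a single
            // request keeps each transaction short-lived.
            URL url = new URL("http://localhost:7474/db/data/transaction/commit");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setRequestProperty("Accept", "application/json");
            conn.setDoOutput(true);

            // Illustrative statement; the real requests carry the actual
            // node/relationship updates.
            String payload = "{\"statements\":[{\"statement\":"
                    + "\"MERGE (n:Item {id: 42}) SET n.updated = timestamp()\"}]}";

            try (OutputStream out = conn.getOutputStream()) {
                out.write(payload.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("HTTP " + conn.getResponseCode());
            conn.disconnect();
        }
    }

Many of these requests run concurrently, each committing its own small transaction.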

Other info: I am using cache_type=strong and a 16GB heap. I've added these flags:

    wrapper.java.additional=-XX:MaxTenuringThreshold=15
    wrapper.java.additional=-XX:SurvivorRatio=20
    wrapper.java.additional=-XX:NewRatio=1

to discourage promotion into the old generation, but I have the problem both with and without them.
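For completeness, the relevant settings look roughly like this in my install (exact key names may differ between Neo4j versions):

    # conf/neo4j-wrapper.conf (sketch)
    wrapper.java.initmemory=16384
    wrapper.java.maxmemory=16384
    wrapper.java.additional=-XX:+UseConcMarkSweepGC
    wrapper.java.additional=-XX:MaxTenuringThreshold=15
    wrapper.java.additional=-XX:SurvivorRatio=20
    wrapper.java.additional=-XX:NewRatio=1

    # conf/neo4j.properties
    cache_type=strong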

