
I'm using GraphDB Free 8.6.1 in a research project, running it with the default configuration on a Linux server with 4 GB of memory in total.

Currently, we execute quite a lot of CRUD operations against the triplestore.

GraphDB threw this exception in the console:

java.lang.OutOfMemoryError: Java heap space
-XX:OnOutOfMemoryError="kill -9 %p"
Executing /bin/sh -c "kill -9 1411"...

Looking into the process, GraphDB runs with the parameter -XX:MaxDirectMemorySize=128G.

I was not able to change it; even with ./graphdb -Xmx3g, the process still runs with -XX:MaxDirectMemorySize=128G.

I've also tried configuring the ./graphdb parameters by setting GDB_HEAP_SIZE=3072m; the process now runs with the additional -Xms3072m -Xmx3072m parameters, but -XX:MaxDirectMemorySize=128G remains.
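
For context, the variable is set along these lines before starting the server (a rough sketch, not the exact commands; the way the variable is exported and the startup invocation may differ per installation):

# export the heap size and start GraphDB from its bin/ directory
export GDB_HEAP_SIZE=3072m
./graphdb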

After the update to GDB_HEAP_SIZE=3072m, the repository went down again: no .hprof file, no exception, nothing suspicious in the logs. Only the following warning was flushed to the console:

Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f5b4b6d0000, 65536, 1) failed; error='Cannot allocate memory' (errno=12)

Please, can you help me configure the GraphDB triplestore properly to get rid of the heap space exceptions?

Thank you.

1 Answer


By default, the value of the JVM's -XX:MaxDirectMemorySize parameter (off-heap memory) is equal to -Xmx (on-heap memory). For very large repositories the off-heap memory may become insufficient, so the GraphDB developers set this parameter to 128 GB, i.e. effectively unlimited.
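
If you ever want to cap it explicitly instead, a minimal sketch (assuming the bin/graphdb startup script picks up extra JVM options from the GDB_JAVA_OPTS environment variable - worth verifying against your 8.6 installation):

# pass an explicit off-heap limit to the JVM started by bin/graphdb (assumed mechanism)
export GDB_JAVA_OPTS="-XX:MaxDirectMemorySize=1g"
./graphdb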

I suspect that your actual issue is allocating too much on-heap memory, which leaves no room in RAM for the off-heap memory. When the database then tries to allocate off-heap RAM, you hit this low-level OS error: 'Cannot allocate memory'.

You have two options for solving this problem:

  • Increase the server's RAM to 8 GB and keep the same configuration - this would allow the 8 GB of RAM to be distributed as: 2 GB (OS) + 3 GB (on heap) + 3 GB (off heap)
  • Decrease the -Xmx value to 2 GB so that the 4 GB of RAM is distributed as: 1 GB (OS) + 2 GB (on heap) + 1 GB (off heap) - see the sketch after this list
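
On your 4 GB machine the second option could look like this, reusing the GDB_HEAP_SIZE variable you already tried (a sketch; the exact startup invocation depends on how you run GraphDB):

# 2 GB heap, leaving roughly 1 GB for the OS and 1 GB for off-heap allocations
export GDB_HEAP_SIZE=2048m
./graphdb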

To get a good approximation of how much RAM GraphDB needs, please check the hardware sizing page:

http://graphdb.ontotext.com/documentation/8.6/free/requirements.html

– vassil_momtchev
  • Since the error is heap-space related, a better approach is to reduce the amount used by the page cache component. By default it is set to use up to 50% of the heap, so a value in the range of 256k-512k should release the rest to be used for other structures, operations and request handling. See http://graphdb.ontotext.com/documentation/standard/configuring-a-repository.html#single-global-page-cache – Damyan Ognyanov Oct 04 '18 at 05:29
  • Thank you very much for the recommendations. Setting GDB_HEAP_SIZE=2048m did the trick. We've performed a set of heavy load tests and the repository held up in full health. I was aware of the requirements and configuration documentation you sent, but in reality, for a non-insider of the technology, it is very hard to find the proper equilibrium, so it is worth asking instead of experimenting endlessly with parameters. – Peter Kostelnik Oct 04 '18 at 08:08