
I have installed Neo4j v3.3.0 (community edition) in an Ubuntu 16.04 virtual machine (Hyper-V) with 8 GB of RAM and 4 cores.

I have a very small graph (30 nodes) that is used almost exclusively for reads (about 1 hit every 3 seconds); it is seldom written to. We want to expand the graph a lot more, but every three days (sometimes less) our server crashes: Java takes more than 2 GB of memory and top shows 300% CPU usage.

To me this makes no sense at all. Could you please let me know how to configure Java or Neo4j to prevent this?

Thanks

I have the following configuration in my /etc/neo4j/neo4j.conf file:

dbms.query_cache_size=5000
dbms.threads.worker_count=4
dbms.memory.heap.initial_size=2g
dbms.memory.heap.max_size=2g

dbms.memory.pagecache.size=2g

The log files show the following error when this happens:

ERROR [o.n.b.v.r.c.RunnableBoltWorker] Worker for session 'ecfe4a7f-1714-4ba3-9e98-a692bf153b45' crashed. Java heap space java.lang.OutOfMemoryError: Java heap space

There are also these suspicious messages (which there are a lot of):

WARN [o.n.k.i.c.MonitorGc] GC Monitor: Application threads blocked for 4680ms.

ERROR [o.n.b.v.t.BoltMessagingProtocolV1Handler] Failed to write response to driver Unable to write to the closed output channel org.neo4j.bolt.v1.packstream.PackOutputClosedException: Unable to write to the closed output channel

WARN [io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception. syscall:read(..) failed: Connection reset by peer

New Information

I ran:

netstat -an | grep ESTABLISHED

It showed a lot of open connections. We are using the JavaScript driver in Node.js (https://github.com/neo4j/neo4j-javascript-driver). I will check whether we are closing connections properly.

It seems I am closing all connections properly with:

session.close();
driver.close();

The connections still remain open until I exit the application.
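One way to guarantee a session is closed even when a query throws is to wrap the session lifecycle in a small helper. This is a minimal sketch, not part of the driver API; it only assumes an object with a `session()` method, so the pattern can be exercised without a live database:

```javascript
// Sketch of a session-per-unit-of-work helper for neo4j-javascript-driver.
// `driver` would normally come from neo4j.driver(...); `withSession` and
// `work` are illustrative names, not driver APIs.
async function withSession(driver, work) {
  const session = driver.session();
  try {
    // Run the caller's queries against the session.
    return await work(session);
  } finally {
    // Always close the session, even if `work` throws, so the
    // underlying connection is returned to the driver's pool.
    session.close();
  }
}
```

Note that even with every session closed, `netstat` can still show ESTABLISHED connections: closing a session returns its connection to the driver's internal pool rather than tearing it down, and pooled connections stay open until `driver.close()` is called at application shutdown. So idle open connections by themselves are expected.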

Final Comments

There was a place in my code where I was not closing connections.

https://github.com/neo4j/neo4j-javascript-driver/issues/275


1 Answer


I can say that Neo4j works fine for me on 8 GB of RAM with 10 million nodes and 30 million relationships, without special tuning.

top showing 300% CPU usage is probably garbage collection running, so I would vote for increasing the heap size.

In /etc/neo4j/neo4j.conf, set the parameter:

dbms.memory.heap.max_size=3g

On larger databases, high CPU consumption often means missing indexes.

To list indexes:

CALL db.indexes();

to create one:

CREATE INDEX ON :Label(prop_name);

If you get OOM errors (see dmesg) and Java gets killed by Linux (rather than crashing on its own), try installing a server OS, like CentOS without a GUI etc., to free some memory. 8 GB is more than enough for Neo4j with an 8-10 GB database.

  • I updated my question with my current neo4j.conf; I already have heap size equal to 2g. It's comforting to know you have a lot more nodes and Neo4j runs fine. I am still confused why it consumes so many resources in my configuration with such a small graph. – supercoco Nov 21 '17 at 13:03
  • You haven't said if the JVM gets killed by the OOM-killer or crashes itself after failing to free internal JVM memory. That's crucial. Also post your dbms.memory.pagecache.size setting; by default it is set to half of total RAM. – Oleg Gritsak Nov 22 '17 at 03:26
  • Also check your queries. As far as I know Neo4j doesn't have a timeout setting for long queries, so after 3 days it might be executing hundreds of them simultaneously. – Oleg Gritsak Nov 22 '17 at 03:34
  • How can I check how many queries are being executed simultaneously? Our queries are very simple, so they shouldn't be long-running. – supercoco Nov 23 '17 at 13:21
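As for inspecting concurrent queries: Neo4j 3.1+ exposes the `CALL dbms.listQueries()` procedure, which lists all currently running queries. A minimal sketch of counting them from the driver's result records; the `countRunningQueries` helper is illustrative, and it assumes the `dbms.listQueries()` call appears in its own output (so one record is subtracted):

```javascript
// Sketch: count currently running queries from the records returned by
// CALL dbms.listQueries() (Neo4j 3.1+). The counting logic is factored
// out so it can be exercised without a live database.
function countRunningQueries(records) {
  // Each record describes one active query; the listQueries call itself
  // shows up in the list, so subtract 1 to ignore it.
  // (Assumption: no other introspection queries are running.)
  return Math.max(records.length - 1, 0);
}

// With a real driver session it might be used like:
// const result = await session.run('CALL dbms.listQueries()');
// console.log('running queries:', countRunningQueries(result.records));
```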