
In my current project we have created a number of integration tests that use a Neo4j DB as part of our CI/CD pipeline. As the number of tests grows, the pipeline slows down, and a common suggestion for other databases is to mount the database in memory to avoid I/O. So I tried running our Docker container with:

    docker run -it --rm -p 7474:7474 -p 7687:7687 -p 8080:8080   \
    --mount type=tmpfs,destination=/data \
    --mount type=tmpfs,destination=/logs \
    --mount type=tmpfs,destination=/var/lib/neo4j/data \
    --mount type=tmpfs,destination=/var/lib/neo4j/logs \
    --mount type=tmpfs,destination=/var/lib/neo4j/metrics \

Note that /var/lib/neo4j/data seems to be a symlink to /data, so I just mapped both. I also verified with df -h that they actually are mounted as tmpfs.
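
For reference, this is roughly how I checked (the container name neo4j-test is just what I use here for illustration):

    # run inside the container; each mount point should show "tmpfs"
    # in the Filesystem column of the output
    docker exec neo4j-test df -h /data /logs /var/lib/neo4j/data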

All looked good, so it was disappointing to see no speed-up in our tests. Are there any official Neo4j resources on this, or has anyone had any luck doing it with Neo4j?

Simon Thordal
  • Neo4j automatically caches your data in memory anyway, so that may explain the lack of speed up. Perhaps you should try to optimize your neo4j [memory configuration](https://neo4j.com/docs/operations-manual/current/performance/memory-configuration/) instead. – cybersam Jun 16 '23 at 20:10
  • In this case the load is write-heavy (each integration test sets up a DB), so caching is not in play here. – Simon Thordal Jun 19 '23 at 06:16
  • Caching is always in play. [This blurb](https://community.neo4j.com/t/how-neo4j-promise-consistency-and-durability-features/24502/2) is pretty informative. A write transaction is first logged and cached, and only later flushed to the stores. So while there is still unavoidable IO to do the logging, it should be less intense than updating the stores immediately. – cybersam Jun 20 '23 at 18:25
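
For anyone who wants to try the memory tuning cybersam suggests in the comments above, a minimal sketch using the official image's environment-variable mapping of config settings (the sizes are illustrative, and the dbms.memory.* setting names are from Neo4j 4.x; in 5.x they live under server.memory.* instead):

    # dots in setting names become single underscores, underscores become
    # double underscores, e.g. dbms.memory.heap.max_size ->
    # NEO4J_dbms_memory_heap_max__size
    docker run -it --rm -p 7474:7474 -p 7687:7687 \
    --env NEO4J_dbms_memory_heap_initial__size=2G \
    --env NEO4J_dbms_memory_heap_max__size=2G \
    --env NEO4J_dbms_memory_pagecache_size=1G \
    neo4j:4.4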

0 Answers