
I noticed that my queries run faster on my local machine than on my server, because on both machines only one CPU core is being used. Is there a way to enable multi-threading so that a query can use 12 (or all 24) cores instead of just one?

I didn't find anything in the documentation about setting this up, but I saw that other graph databases do support it. If it is enabled by default, what could cause GraphDB to use only a single core?

1 Answer


By default GraphDB will use all available CPU cores unless limited by the license type; the Free Edition is limited to two concurrent read operations. However, I suspect that what you are really asking about is query parallelism, i.e. decomposing a single query into smaller tasks and executing them in parallel.

  • Write operations in GraphDB SE/EE are always split into multiple parallel tasks, so they benefit from multiple cores. GraphDB Free is limited to a single core for commercial reasons.
  • Read operations are always executed on a single thread, because in the general case this is faster. In some specific scenarios, such as heavy aggregates over large collections, parallelizing the query execution could bring a substantial benefit, but this is currently not supported.

To sum up: multiple cores will only help you handle more concurrent queries, not process a single query faster. This design limitation may change in upcoming versions.
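Since the extra cores pay off through concurrency rather than per-query parallelism, a client can still exploit them by issuing several queries at once. Below is a minimal sketch, assuming a local GraphDB instance on the default port 7200 and a hypothetical repository named "myrepo", that sends SPARQL queries over the standard SPARQL Protocol from a small thread pool; it is the throughput of the batch, not the latency of any single query, that should improve with more cores.

```python
# Minimal sketch: issue several read queries concurrently against GraphDB.
# Assumptions (not from the answer): GraphDB runs on localhost:7200 and the
# repository is named "myrepo" -- adjust both to match your setup.
from concurrent.futures import ThreadPoolExecutor

import requests

ENDPOINT = "http://localhost:7200/repositories/myrepo"  # hypothetical repository

QUERIES = [
    "SELECT (COUNT(*) AS ?c) WHERE { ?s ?p ?o }",
    "SELECT DISTINCT ?type WHERE { ?s a ?type } LIMIT 100",
    "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 1000",
]

def run_query(query: str) -> int:
    """Send one SPARQL query via the SPARQL Protocol and return the row count."""
    response = requests.get(
        ENDPOINT,
        params={"query": query},
        headers={"Accept": "application/sparql-results+json"},
        timeout=60,
    )
    response.raise_for_status()
    return len(response.json()["results"]["bindings"])

# Each query still runs on a single thread server-side, but submitting them
# concurrently lets GraphDB spread the *set* of queries across its cores.
with ThreadPoolExecutor(max_workers=4) as pool:
    for rows in pool.map(run_query, QUERIES):
        print(rows, "rows")
```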

vassil_momtchev
  • Thanks for the clarification, but then I do not get how databases that are way bigger than mine handle normal queries. – Niklas Wilke Oct 23 '18 at 13:20
  • Query parallelism is always risky and often leads to contention in the database, which can actually make a query much slower. If you want to optimize query speed, please use the query explain plan and the tips it includes (see the sketch after these comments): http://graphdb.ontotext.com/documentation/free/explain-plan.html – vassil_momtchev Oct 23 '18 at 14:25
  • Thanks so much! 0.3s for one query. – Niklas Wilke Oct 24 '18 at 14:12
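As a follow-up to the explain-plan tip in the comment above, the sketch below shows one way to request GraphDB's query plan: per the linked documentation, adding the pseudo-graph onto:explain (<http://www.ontotext.com/explain>) to the FROM clause makes GraphDB return the execution plan rather than the normal results. The endpoint and repository name are the same hypothetical ones as in the earlier sketch.

```python
# Minimal sketch of requesting GraphDB's explain plan, per the documentation
# linked in the comment above. Endpoint and repository name are hypothetical.
import requests

ENDPOINT = "http://localhost:7200/repositories/myrepo"

EXPLAIN_QUERY = """
PREFIX onto: <http://www.ontotext.com/>
SELECT ?s ?p ?o
FROM onto:explain
WHERE { ?s ?p ?o }
LIMIT 10
"""

response = requests.get(
    ENDPOINT,
    params={"query": EXPLAIN_QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=60,
)
response.raise_for_status()
print(response.text)  # the execution plan is returned in the response body
```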