
I'm using Solr 5.4 to store data processed by a web application. The data is saved to several Solr cores and then replicated to another Solr instance on a different server.

My problem is that this error sometimes occurs on the Solr instance with the master cores when my web application tries to save the processed data. I have already identified part of the problem: the application tries to save data to the same core from different threads. As a temporary workaround I delayed the saves, but eventually the write.lock error happens again.

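To give an idea of the concurrent writes involved, here is a minimal SolrJ sketch of the pattern (assuming a SolrJ client is used; the core URL, field names, document count, and thread count are made up for illustration and are not the real application code): several worker threads add documents to the same core through a single shared HttpSolrClient, and one commit is issued at the end.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class ConcurrentSolrSave {

    // One client per core, shared by every thread (HttpSolrClient is thread-safe),
    // rather than a new client per thread.
    private static final SolrClient client =
            new HttpSolrClient("http://localhost:8983/solr/CORE");

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        for (int i = 0; i < 100; i++) {
            final int id = i;
            pool.submit(() -> {
                try {
                    SolrInputDocument doc = new SolrInputDocument();
                    doc.addField("id", Integer.toString(id));
                    doc.addField("payload_s", "processed value " + id);
                    client.add(doc); // concurrent adds through the shared client
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);

        client.commit(); // single commit after the batch, not one per thread
        client.close();
    }
}
```
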
I have read the documentation, but I'm still unsure about the best way to configure the cores.

Right now the cores have the default configuration.
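By "default configuration" I mean the stock solrconfig.xml for each core, so the index-lock settings are untouched. If I understand the docs correctly, the relevant section looks roughly like this (native is the shipped default lock type):

```xml
<!-- excerpt from a core's solrconfig.xml: index locking -->
<indexConfig>
  <!-- native (NativeFSLockFactory) is the default; "simple" and "single" are other options -->
  <lockType>${solr.lock.type:native}</lockType>
</indexConfig>
```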

  • Have you tried using the simple lock type (if it's currently the native FS lock, the default)? https://cwiki.apache.org/confluence/display/solr/IndexConfig+in+SolrConfig#IndexConfiginSolrConfig-IndexLocks – MatsLindh Oct 14 '16 at 16:44
  • @MatsLindh Tried that; it gives this error: `CORE: org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Index dir '/var/solr/data/CORE/data/index/' of core 'CORE' is already locked. The most likely cause is another Solr server (or another solr core in this server) also configured to use this directory; other possible causes may be specific to lockType: simple` – Progs Oct 14 '16 at 16:48
  • Did you clean up the existing lock file first, or does this happen while the Solr server is running? You're not running multiple Solr instances on the same server? – MatsLindh Oct 14 '16 at 16:51
  • @MatsLindh Yes, I cleaned the lock file, and no, I'm not running multiple instances. The recent error I posted happened when I changed the lock type. – Progs Oct 14 '16 at 16:53

0 Answers