20

My application performs very frequent Solr writes from multiple clients via REST, using the autocommit feature through the `commitWithin` attribute. After a couple of days of use, `LockObtainFailedException`s started appearing. I'm having a hard time figuring out what the problem might be; any help is appreciated. I'm using Solr 3.1 with Tomcat 6.
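For illustration, the updates I send look roughly like this (the field names are placeholders, not my actual schema):

```xml
<!-- POSTed to the /update handler; commitWithin is in milliseconds -->
<add commitWithin="10000">
  <doc>
    <field name="id">doc-123</field>
    <field name="title">Example title</field>
  </doc>
</add>
```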

Here is the error dump from Solr:


HTTP Status 500 - Lock obtain timed out: NativeFSLock@/var/lib/solr/data/index/write.lock

org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/var/lib/solr/data/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1097)
at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:83)
at org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHandler.java:102)
at org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHandler2.java:174)
at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:222)
at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:61)
at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:147)
at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:77)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:55)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1360)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:662)
– Nands

3 Answers

16

I increased `writeLockTimeout` in `solrconfig.xml` from 1 second to 20 seconds, and it appears to be working fine now.
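For reference, a sketch of where this lives in a Solr 3.x `solrconfig.xml` (the value is in milliseconds, so 20 seconds is 20000):

```xml
<!-- solrconfig.xml (Solr 3.x layout) -->
<indexDefaults>
  <!-- other index settings omitted -->
  <!-- writeLockTimeout is in milliseconds; the default is 1000 (1 second) -->
  <writeLockTimeout>20000</writeLockTimeout>
</indexDefaults>
```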

– Nands
  • I was getting the same error with a single client writing a single record, and increasing `writeLockTimeout` fixed it. I don't think it has anything to do with concurrency or "multiple simultaneous writes", which was the original title of this post. I think the index is just growing, and the time a write takes has increased. – Jonathan Tran Aug 18 '11 at 23:03
7

This usually happens because Solr terminated in a non-standard way, so its lock didn't get cleaned up. Was there a JVM or Solr crash before this happened?

Another cause is pointing multiple Solr servers at the same index directory. See e.g. this question.
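For the stale-lock case, a hedged sketch of the related settings in the `<mainIndex>` section of a Solr 3.x `solrconfig.xml` (only enable `unlockOnStartup` if you are certain no other writer is running against the index):

```xml
<!-- solrconfig.xml (Solr 3.x) -->
<mainIndex>
  <!-- "native" = OS-level locking (NativeFSLock, as in the trace above);
       "simple" = a plain lock file (SimpleFSLock) -->
  <lockType>native</lockType>
  <!-- If true, Solr removes a leftover write lock on startup.
       Only safe when no other process can be writing to this index. -->
  <unlockOnStartup>false</unlockOnStartup>
</mainIndex>
```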

– Xodarap
  • Thanks, I'll look into the JVM logs. I guess it requires some tuning. – Nands May 26 '11 at 08:26
  • How does one clean the locks up? – Trip Jul 02 '13 at 15:30
  • 1
    Solr uses file locking, so rename or delete the lock file (ours contained `write.lock` in its name). The file lives in the index folder -- for us, `SolrHome\data\index`. Our file `lucene-5d886598917ad7fbb03256c713a8aacb-write.lock` was renamed to `TEMP__lucene-5d886598917ad7fbb03256c713a8aacb-write_lock__TEMP`, and after that, re-indexing ran without locking problems. Obviously, that's no replacement for (1) prevention (scheduling re-indexing), (2) proper error handling (writer.close & writer.open?), or (3) timeout settings appropriate for you. – Doug_Ivison Nov 14 '13 at 19:18
2

It could also be a lack of memory; insufficient memory can cause these lock errors, and I'm seeing the same thing on my server.

From the Apache docs: If your Solr instance doesn't have enough memory allocated to it, the Java virtual machine will sometimes throw a Java OutOfMemoryError. There is no danger of data corruption when this occurs, and Solr will attempt to recover gracefully. Any adds/deletes/commits in progress when the error was thrown are not likely to succeed, however. Other adverse effects may arise. For instance, if the SimpleFSLock locking mechanism is in use (as is the case in Solr 1.2), an ill-timed OutOfMemoryError can potentially cause Solr to lose its lock on the index. If this happens, further attempts to modify the index will result in

SEVERE: Exception during commit/optimize:java.io.IOException: Lock obtain timed out: SimpleFSLock@/tmp/lucene-5d12dd782520964674beb001c4877b36-write.lock

See http://wiki.apache.org/solr/SolrPerformanceFactors