
We are running SolrCloud 4.3.1 with 2 shards and 2 replicas, with the replicas deployed across data centers. I have an issue deleting a single Solr document from a collection by its unique id field: the delete behavior is intermittent. After the deletion, when I execute a Solr query to fetch the deleted record, I sometimes still get the deleted document back as a result, which should not happen.

If anyone has an idea, please help me resolve this problem.

Brian Tompsett - 汤莱恩

1 Answer


Per the Solr Update wiki:

"commit" and "optimize"

A commit operation makes index changes visible to new search requests. A hard commit also calls fsync on the index files to ensure they have been flushed to stable storage and no data loss will result from a power failure.

A soft commit is much faster since it only makes index changes visible and does not fsync index files or write a new index descriptor. If the JVM crashes or there is a loss of power, changes that occurred after the last hard commit will be lost. Search collections that have near-real-time requirements (that want index changes to be quickly visible to searches) will want to soft commit often but hard commit less frequently.

An optimize is like a hard commit except that it forces all of the index segments to be merged into a single segment first. Depending on the use cases, this operation should be performed infrequently (like nightly), if at all, since it is very expensive and involves reading and re-writing the entire index. Segments are normally merged over time anyway (as determined by the merge policy), and optimize just forces these merges to occur immediately.
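To make the hard/soft distinction concrete, a commit can be requested per update via the `commit` and `softCommit` request parameters on the update handler. Below is a minimal sketch that only builds the update URLs; the host, port, and collection name (`localhost:8983`, `mycollection`) are assumptions, not values from the question.

```python
from urllib.parse import urlencode

# Hypothetical base URL; adjust host/port/collection for your cluster.
BASE = "http://localhost:8983/solr/mycollection/update"

def update_url(hard_commit=False, soft_commit=False):
    """Build an update-handler URL with explicit commit parameters."""
    params = {}
    if hard_commit:
        params["commit"] = "true"       # flushes and fsyncs segments; durable
    if soft_commit:
        params["softCommit"] = "true"   # visibility only; faster, not durable
    return BASE + ("?" + urlencode(params) if params else "")

print(update_url(hard_commit=True))
print(update_url(soft_commit=True))
```

For near-real-time visibility you would typically configure `autoSoftCommit` with a short interval and `autoCommit` with a longer one in `solrconfig.xml`, rather than committing on every request.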

This means that any index changes you make will not be visible until after a commit. An easy way to do this in the UI is to just click the reload button.
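Applied to the question: the intermittent results likely appear because the delete was issued without a commit, so some replicas still serve the old searcher. One way to rule that out is to send the delete-by-id together with an explicit commit in a single JSON update request. The sketch below only constructs the request body; the collection name and document id are hypothetical placeholders.

```python
import json

# Hypothetical values: collection name and unique-key value are assumptions.
collection = "mycollection"
doc_id = "doc-42"

# JSON update body: delete by unique id, plus an explicit commit so the
# change becomes visible to new searchers on all replicas.
payload = json.dumps({"delete": {"id": doc_id}, "commit": {}})

# This body would be POSTed (Content-Type: application/json) to:
endpoint = f"/solr/{collection}/update?wt=json"
print(endpoint, payload)
```

Once the commit has propagated, a query for the deleted id should consistently return zero results on every replica.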

nick_v1