
As far as I know, simhash and minhash are suitable for this task, but both algorithms have to traverse the whole text database, which will be quite awful. Is there any optimization or another algorithm that can accelerate the task? All I have come up with is slicing the text database into several parts and computing pairwise similarity in parallel. My text database has about 1 billion records.

Leo Zhao
  • I use MongoDB to store the texts. Is there anything in MongoDB that can help reduce the load of traversing the whole database, such as pre-storing the hash codes of the texts? That didn't help much in my trial. – Leo Zhao Nov 18 '15 at 16:16
  • You'll have to traverse the database at least once to do anything useful. – Juan Lopes Dec 08 '15 at 00:36
  • @Juan Lopes Thank you; I have migrated the program to Spark for distributed computation and it works well. – Leo Zhao Dec 08 '15 at 10:15
  • _But all those algorithms have to traverse the whole text database which will be quite awful._ Actually, with minhash you first compare just a certain part of a document, and you do it using hashes, so you compare portions of each document rather than the whole of every document in the database. – David Batista May 20 '16 at 11:55
  • Tokenize the first 100-2000 words of each document and store the token numbers in the database. With tokens you can quickly calculate a simhash for your documents (see the sketch below), and tokenization also provides other nice tools like fast keyword or phrase search. – itix Jun 26 '17 at 07:55
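
A minimal sketch of the token-based simhash idea from the comment above; md5 as the per-token hash, 64-bit fingerprints, and whitespace tokenization are arbitrary choices here:

```python
import hashlib

def simhash(tokens, bits=64):
    """64-bit simhash of a token sequence.

    Each token votes its hash bits into a tally; the fingerprint keeps
    the majority bit in each position, so documents with mostly shared
    tokens get fingerprints with a small Hamming distance.
    """
    tally = [0] * bits
    for token in tokens:
        # Stable 64-bit token hash (md5 truncated; any stable hash works).
        h = int(hashlib.md5(token.encode("utf-8")).hexdigest()[:16], 16)
        for i in range(bits):
            tally[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if tally[i] > 0)

def hamming(a, b):
    return bin(a ^ b).count("1")

doc1 = "the quick brown fox jumps over the lazy dog".split()
doc2 = "the quick brown fox leaps over the lazy dog".split()
print(hamming(simhash(doc1), simhash(doc2)))  # small distance => near-duplicates
```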

1 Answer


You must traverse the entire database once (1 billion records).

The benefit of minhash and simhash is that you don't have to individually compare every possible pair to see if they are similar: with 10^9 records there are 10^9 × (10^9 − 1) / 2 ≈ 5 × 10^17 possible pairs, roughly 500 quadrillion.

Splitting the database into multiple parts is not going to help; you will simply miss some similarities. Splitting is only sensible if the records fall naturally into groups that you know cannot have any similarities between them (for instance, if you have two very distinct types of record that are never similar to each other, you can treat them separately for similarity detection).

Both simhash and minhash can benefit from distributed computing. Generating hashes can be distributed as much as you like. Storage of hashes can be split with map/reduce if you like, though for simhash you probably won't need this, as the hashes are compact enough to fit in a fairly standard machine's main memory.
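
As a small illustration of how freely hash generation distributes, here is a single-machine sketch using Python's multiprocessing. The md5-based fingerprint is just a stand-in for whichever of simhash/minhash you pick, and a Spark job (as the asker ended up using) follows the same pattern at cluster scale:

```python
import hashlib
from multiprocessing import Pool

def fingerprint(record):
    # Stand-in for simhash/minhash generation; hashing each record is
    # independent of every other record, so it parallelizes freely.
    doc_id, text = record
    return doc_id, int(hashlib.md5(text.encode("utf-8")).hexdigest()[:16], 16)

if __name__ == "__main__":
    # Toy corpus; in practice you would stream batches from MongoDB.
    records = [(i, f"text of document {i}") for i in range(10_000)]
    with Pool() as pool:
        # Workers hash chunks independently; only the compact
        # fingerprints travel back to the parent process.
        fingerprints = dict(pool.imap_unordered(fingerprint, records, chunksize=256))
    print(len(fingerprints))  # 10000
```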

Simhash can only find pairs that are very closely similar, and it often needs a fair bit of tuning to work really well. If you want to find looser similarities, use one of the minhash variants, which are more forgiving. I recommend looking at superminhash in conjunction with LSH. Superminhash is fast at generating hashes, but perhaps more importantly it achieves better precision, so fewer hashes need to be stored.

LSH groups the hashes into bands so that you don't compare individual hashes; you compare an entire band at a time. Both techniques mean fewer queries are needed to find individual shared hashes (or bands, in the latter case), and LSH in particular means fewer results need to be processed for each query. This should give you a substantial speedup.
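
A minimal from-scratch sketch of minhash with LSH banding; the signature length, band/row split, and word-trigram shingling are arbitrary choices here, superminhash would slot in as a replacement for the signature step, and libraries such as datasketch package the same machinery:

```python
import hashlib
import random
from collections import defaultdict

NUM_HASHES = 128               # minhash signature length
BANDS, ROWS = 32, 4            # 32 bands x 4 rows = 128 hashes
PRIME = (1 << 61) - 1          # Mersenne prime for the hash family

rng = random.Random(42)
HASH_FUNCS = [(rng.randrange(1, PRIME), rng.randrange(PRIME))
              for _ in range(NUM_HASHES)]

def shingles(text, k=3):
    """Word k-grams; any shingling scheme works."""
    words = text.split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def minhash_signature(text):
    """For each hash function, keep the minimum value over all shingles."""
    hs = [int(hashlib.md5(s.encode("utf-8")).hexdigest()[:8], 16)
          for s in shingles(text)]
    return [min((a * h + b) % PRIME for h in hs) for a, b in HASH_FUNCS]

# One bucket table; each document is inserted under one key per band.
# Documents sharing any band key become candidates, so similar pairs
# are found without comparing every record against every other.
buckets = defaultdict(list)

def index(doc_id, text):
    sig = minhash_signature(text)
    for band in range(BANDS):
        key = (band, tuple(sig[band * ROWS:(band + 1) * ROWS]))
        buckets[key].append(doc_id)

index("a", "the quick brown fox jumps over the lazy dog today")
index("b", "the quick brown fox jumps over the lazy cat today")
index("c", "completely unrelated text about mongodb sharding and indexes here")

candidates = {tuple(ids) for ids in buckets.values() if len(ids) > 1}
print(candidates)  # ('a', 'b') is a candidate with high probability; 'c' is not
```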

Ben Whitmore