
I am trying to build a NaiveBayes classifier with Spark's MLlib that takes a set of documents as its input.

I'd like to use several things as features (e.g. authors, explicit tags, implicit keywords, category), but looking at the documentation it seems that a LabeledPoint can only contain doubles, i.e. it looks like LabeledPoint[Double, List[Pair[Double,Double]]].

Instead, what I have as output from the rest of my code would be something like LabeledPoint[Double, List[Pair[String,Double]]].

I could make up my own conversion, but it seems odd. How am I supposed to handle this using MLlib?

I believe the answer is in the HashingTF class (i.e. hashing features), but I don't understand how it works: it appears to take some sort of capacity value, yet my list of keywords and topics is effectively unbounded (or rather, unknown at the beginning).


1 Answer


HashingTF uses the hashing trick to map a potentially unbounded number of features to a vector of bounded size. There is a possibility of feature collisions, but the chance of a collision can be made smaller by choosing a larger number of features in the constructor.
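
As a minimal sketch of what that looks like in practice (the terms and the size 2^20, which also happens to be the default, are just example values):

import org.apache.spark.mllib.feature.HashingTF

// A larger numFeatures means fewer collisions but longer feature vectors.
val tf = new HashingTF(1 << 20)

// Each term is hashed into one of the numFeatures slots.
val slotForCats = tf.indexOf("cats")

// transform counts term occurrences and returns a sparse vector of size numFeatures.
val counts = tf.transform(Seq("cats", "dogs", "cats"))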

In order to create features based not only on the content of a feature but also on some metadata (e.g. having the tag 'cats' as opposed to having the word 'cats' in the document), you could feed the HashingTF class something like 'tag:cats', so that a tag containing a word hashes to a different slot than the word by itself.
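
For example, a sketch of building such prefixed terms (the 'word:'/'tag:'/'author:' prefixes are an arbitrary convention chosen here, not anything MLlib prescribes):

import org.apache.spark.mllib.feature.HashingTF

val tf = new HashingTF(1 << 20)

// Prefix each feature with its kind so that 'cats' as a tag and 'cats' as a word
// hash to different slots.
val words   = Seq("cats", "are", "nice").map("word:" + _)
val tags    = Seq("cats", "pets").map("tag:" + _)
val authors = Seq("alice").map("author:" + _)

val featureCountsForDoc = tf.transform(words ++ tags ++ authors)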

If you've created feature count vectors using HashingTF, you can use them to create bag-of-words features by setting any count above zero to 1. You can also create TF-IDF vectors using the IDF class like so:

val tfIdf = new IDF().fit(featureCounts).transform(featureCounts)
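
Putting the pieces together with NaiveBayes, a rough end-to-end sketch could look like this (the docs RDD, the label extraction and the lambda value are placeholders for whatever the rest of your code produces):

import org.apache.spark.mllib.feature.{HashingTF, IDF}
import org.apache.spark.mllib.classification.NaiveBayes
import org.apache.spark.mllib.regression.LabeledPoint

// docs: RDD[(Double, Seq[String])] of (label, prefixed terms) -- a placeholder.
val tf = new HashingTF(1 << 20)

val labels        = docs.map { case (label, _) => label }
val featureCounts = docs.map { case (_, terms) => tf.transform(terms) }

// Optionally reweight the raw counts by inverse document frequency.
val tfIdf = new IDF().fit(featureCounts).transform(featureCounts)

val training = labels.zip(tfIdf).map { case (label, features) =>
  LabeledPoint(label, features)
}

val model = NaiveBayes.train(training, lambda = 1.0)

If you want the binarized bag-of-words variant mentioned above instead, map each count vector's non-zero values to 1.0 before building the LabeledPoints.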

In your case it looks like you've already computed the counts of words per document. Those precomputed counts won't work with the HashingTF class, since it's designed to do the counting for you.

This paper has some arguments about why feature collisions aren't that much of a problem in language applications. The essential reasons are that most words are uncommon (due to properties of language) and that collisions are independent of word frequencies (due to properties of hashing), so it's unlikely that two words common enough to help your model will hash to the same slot.

  • thanks, just one extra clarification: if I understand correctly, `numFeatures` in `HashingTF` is basically used as the `mod` value to bound the number of features to a given maximum? If so, shouldn't it just be `Double.MAX_VALUE`? Or is the idea that it can restrict different kinds of features to given ranges and limit cross-collisions? (i.e. put some kinds of features in 1..N and others in N..2N; you'd have collisions within a kind but not across kinds) – riffraff Dec 16 '14 at 09:21
  • Yes, the computation looks like `features[hash(feature) % numFeatures] += 1` (see the small sketch after these comments). The vectors that are created are usually used as input to some model, so using `Double.MAX_VALUE` would imply a gigantic model. One of the main motivations of the hashing trick is memory reduction. You certainly could create features in the way you are suggesting, but I'm not sure how to evaluate the benefits of such an approach. – mrmcgreg Dec 16 '14 at 13:54
  • ah of course, I was thinking of sparse vectors so didn't consider the array size. Thanks for your help! – riffraff Dec 17 '14 at 08:34
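
To make the indexing in the comment above concrete, here is a toy standalone version of the hashing trick (a plain Scala illustration of the idea, not MLlib's exact implementation):

// Hash each term, take the hash modulo numFeatures, and accumulate counts
// in an array of that fixed size.
def hashingTrickCounts(terms: Seq[String], numFeatures: Int): Array[Double] = {
  val features = new Array[Double](numFeatures)
  for (term <- terms) {
    // & Int.MaxValue keeps the index non-negative, since hashCode can be negative.
    val idx = (term.hashCode & Int.MaxValue) % numFeatures
    features(idx) += 1
  }
  features
}

val counts = hashingTrickCounts(Seq("word:cats", "tag:cats", "word:cats"), 1 << 10)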