I have been running a series of topic modeling experiments in Spark, varying the number of topics. So, given an RDD docsWithFeatures, I'm doing something like this:
import org.apache.spark.mllib.clustering.LDA

for (n_topics <- Range(65, 301, 5)) {
  val s = n_topics.toString
  val lda = new LDA().setK(n_topics).setMaxIterations(20) // .setAlpha(), .setBeta()
  val ldaModel = lda.run(docsWithFeatures)
  // now do some eval, save results to file, etc...
}
This has been working great, but I also want to compare results if I first normalize my data with TF-IDF. Now, to the best of my knowledge, LDA strictly expects a bag-of-words format where term frequencies are integer values. But in principle (and I've seen plenty of examples of this), the math works out fine if we first convert integer term frequencies to float TF-IDF values. My current approach to doing this is the following (again given my docsWithFeatures RDD):
import org.apache.spark.mllib.feature.IDF

val index_reset = docsWithFeatures.map(_._2).cache() // keep just the feature vectors for fitting
val idf = new IDF().fit(index_reset)
val tfidf = idf.transform(index_reset).zipWithIndex.map(x => (x._2, x._1)) // re-attach (id, vector) pairs for LDA
I can then run the same code as in the first block, substituting tfidf for docsWithFeatures. This works without any crashes, but my main question here is whether this is OK to do. That is, I want to make sure Spark isn't doing anything funky under the hood, like silently converting the float values coming out of the TF-IDF transform back to integers.
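For what it's worth, here is the kind of sanity check I had in mind, just a sketch that assumes tfidf is the RDD of (Long, Vector) pairs built above and only peeks at a few vectors:

// Peek at a few transformed vectors to confirm the TF-IDF weights
// are still fractional doubles before handing them to LDA.
tfidf.take(3).foreach { case (id, vec) =>
  val sampleWeights = vec.toArray.filter(_ != 0.0).take(5)
  println(s"doc $id sample weights: ${sampleWeights.mkString(", ")}")
}

Of course, that only verifies the input side, i.e. the values I'm feeding into lda.run(tfidf), not what LDA does with them internally, which is exactly what I'm asking about.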