
I'm trying to perform LDA on a Wikipedia XML dump. After getting an RDD of raw text, I create a DataFrame and transform it through a pipeline of RegexTokenizer, StopWordsRemover and CountVectorizer stages. I intend to pass the RDD of vectors output by CountVectorizer to online LDA in MLlib. Here's my code:

 // Configure an ML pipeline
 RegexTokenizer tokenizer = new RegexTokenizer()
   .setInputCol("text")
   .setOutputCol("words");

 StopWordsRemover remover = new StopWordsRemover()
          .setInputCol("words")
          .setOutputCol("filtered");

 CountVectorizer cv = new CountVectorizer()
          .setVocabSize(vocabSize)
          .setInputCol("filtered")
          .setOutputCol("features");

 Pipeline pipeline = new Pipeline()
          .setStages(new PipelineStage[] {tokenizer, remover, cv});

// Fit the pipeline to train documents.
 PipelineModel model = pipeline.fit(fileDF);

 JavaRDD<Vector> countVectors = model.transform(fileDF)
          .select("features").toJavaRDD()
          .map(new Function<Row, Vector>() {
            public Vector call(Row row) throws Exception {
                Object[] arr = row.getList(0).toArray();

                double[] features = new double[arr.length];
                int i = 0;
                for(Object obj : arr){
                    features[i++] = (double)obj;
                }
                return Vectors.dense(features);
            }
          });

I'm getting a ClassCastException because of this line:

Object[] arr = row.getList(0).toArray();


Caused by: java.lang.ClassCastException: org.apache.spark.mllib.linalg.SparseVector cannot be cast to scala.collection.Seq
at org.apache.spark.sql.Row$class.getSeq(Row.scala:278)
at org.apache.spark.sql.catalyst.expressions.GenericRow.getSeq(rows.scala:192)
at org.apache.spark.sql.Row$class.getList(Row.scala:286)
at org.apache.spark.sql.catalyst.expressions.GenericRow.getList(rows.scala:192)
at xmlProcess.ParseXML$2.call(ParseXML.java:142)
at xmlProcess.ParseXML$2.call(ParseXML.java:1)

I found the Scala syntax to do this here, but couldn't find an example in Java. I tried row.getAs[Vector](0), but that's Scala syntax. Is there a way to do this in Java?


2 Answers


So I was able to do it with a simple cast to Vector. I don't know why I didn't try the simple thing first!

         JavaRDD<Vector> countVectors = model.transform(fileDF)
              .select("features").toJavaRDD()
              .map(new Function<Row, Vector>() {
                public Vector call(Row row) throws Exception {
                    return (Vector)row.get(0);
                }
              });

Or with a lambda expression:

JavaRDD<Vector> countVectors = model.transform(fileDF)
                  .select("features")
                  .toJavaRDD()
                  .map((Function<Row, Vector>) row -> (Vector) row.get(0));
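To see why the original getList call fails while the plain cast succeeds: a Row stores each column as a plain Object, and getList blindly casts that Object to a sequence, but the features column actually holds a Vector. Here's a minimal sketch of the same failure mode without Spark (the CastDemo class and its column value are hypothetical stand-ins, not Spark classes):

```java
import java.util.List;

public class CastDemo {
    public static void main(String[] args) {
        // Hypothetical stand-in for a Row column: the value is held as a
        // plain Object, and its runtime type here is double[], not a List.
        Object column = new double[] {1.0, 0.0, 2.0};

        boolean threw = false;
        try {
            // Mirrors row.getList(0): assumes the column is a list and casts blindly.
            List<?> asList = (List<?>) column;
            System.out.println(asList.size());
        } catch (ClassCastException e) {
            threw = true; // same failure mode as the Spark stack trace in the question
        }
        System.out.println("getList-style cast threw: " + threw);

        // The accepted fix: cast to the value's actual runtime type instead.
        double[] asArray = (double[]) column;
        System.out.println("length: " + asArray.length);
    }
}
```

The cast to Vector works for the same reason: it matches the runtime type that Spark actually put in the column.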

     

You don't need to convert the DataFrame/Dataset to a JavaRDD for LDA to work. After a few hours of fiddling, I finally got the native RDD working in Scala.

Relevant imports:

import org.apache.spark.ml.feature.{CountVectorizer, RegexTokenizer, StopWordsRemover}
import org.apache.spark.ml.linalg.{Vector => MLVector}
import org.apache.spark.mllib.clustering.{LDA, OnlineLDAOptimizer}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.sql.{Row, SparkSession}

The relevant snippet of the code follows; the rest remains the same as in this example:

val cvModel = new CountVectorizer()
        .setInputCol("filtered")
        .setOutputCol("features")
        .setVocabSize(vocabSize)
        .fit(filteredTokens)


val countVectors = cvModel
        .transform(filteredTokens)
        .select("docId","features")
        .rdd.map { case Row(docId: String, features: MLVector) => 
                   (docId.toLong, Vectors.fromML(features)) 
                 }
val mbf = {
    // add (1.0 / corpusSize) to the mini-batch fraction to be more robust on tiny datasets
    val corpusSize = countVectors.count()
    2.0 / maxIterations + 1.0 / corpusSize
  }
  val lda = new LDA()
    .setOptimizer(new OnlineLDAOptimizer().setMiniBatchFraction(math.min(1.0, mbf)))
    .setK(numTopics)
    .setMaxIterations(2)
    .setDocConcentration(-1) // use default symmetric document-topic prior
    .setTopicConcentration(-1) // use default symmetric topic-word prior

  val startTime = System.nanoTime()
  val ldaModel = lda.run(countVectors)
  val elapsed = (System.nanoTime() - startTime) / 1e9

  /**
    * Print results.
    */
  // Print training time
  println(s"Finished training LDA model.  Summary:")
  println(s"Training time (sec)\t$elapsed")
  println(s"==========")

Thanks go to the author of the code here.
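For readers who want to stay in Java, the same ml-to-mllib vector bridge can be sketched like this. This is an untested fragment, not a complete program: it assumes Spark 2.x (where CountVectorizer emits org.apache.spark.ml.linalg.Vector) and reuses the cvModel, filteredTokens and numTopics names from the Scala snippet above.

```
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.mllib.clustering.LDA;
import org.apache.spark.mllib.clustering.LDAModel;
import org.apache.spark.mllib.linalg.Vectors;
import scala.Tuple2;

JavaPairRDD<Long, org.apache.spark.mllib.linalg.Vector> corpus = cvModel
        .transform(filteredTokens)
        .select("docId", "features")
        .toJavaRDD()
        .mapToPair(row -> new Tuple2<>(
                Long.parseLong(row.getString(0)),
                // bridge the ml Vector to the mllib Vector that LDA.run expects
                Vectors.fromML(row.<org.apache.spark.ml.linalg.Vector>getAs(1))));

LDAModel ldaModel = new LDA().setK(numTopics).run(corpus);
```

The key call is Vectors.fromML, which converts the new ml vector type into the mllib one without copying through an intermediate array.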
