
I am relatively new to Spark and Scala.

I am starting with the following DataFrame (a single column containing a dense Vector of Doubles):

scala> val scaledDataOnly_pruned = scaledDataOnly.select("features")
scaledDataOnly_pruned: org.apache.spark.sql.DataFrame = [features: vector]

scala> scaledDataOnly_pruned.show(5)
+--------------------+
|            features|
+--------------------+
|[-0.0948337274182...|
|[-0.0948337274182...|
|[-0.0948337274182...|
|[-0.0948337274182...|
|[-0.0948337274182...|
+--------------------+

A straight conversion to an RDD yields an instance of org.apache.spark.rdd.RDD[org.apache.spark.sql.Row]:

scala> val scaledDataOnly_rdd = scaledDataOnly_pruned.rdd
scaledDataOnly_rdd: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[32] at rdd at <console>:66

Does anyone know how to convert this DF to an instance of org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector] instead? My various attempts have been unsuccessful so far.

Thank you in advance for any pointers!


3 Answers


Just found out:

import org.apache.spark.sql.Row
import org.apache.spark.mllib.linalg.Vector

val scaledDataOnly_rdd = scaledDataOnly_pruned.map { x: Row => x.getAs[Vector](0) }
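
Note: DataFrame.map returning an RDD is Spark 1.x behavior. On Spark 2.x, where DataFrame is Dataset[Row] and map returns a Dataset, a sketch of the equivalent goes through .rdd first (and, depending on which API produced the column, the vector class may be org.apache.spark.ml.linalg.Vector rather than the mllib one):

import org.apache.spark.mllib.linalg.Vector

// Sketch assuming Spark 2.x: drop to RDD[Row] first, then extract the vector column.
val vectorRdd: org.apache.spark.rdd.RDD[Vector] =
  scaledDataOnly_pruned.rdd.map(row => row.getAs[Vector](0))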

EDIT: use a more sophisticated way to interpret the fields in the Row.

This worked for me:

import org.apache.spark.mllib.linalg.Vectors

val featureVectors = features.map(row => {
  Vectors.dense(row.toSeq.toArray.map {
    case d: Double => d          // pass doubles through unchanged
    case s: String => s.toDouble
    case l: Long   => l.toDouble
    case _         => 0.0        // fall back for nulls and other types
  })
})

Here, features is a Spark SQL DataFrame.
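
For context, a minimal usage sketch (assuming a Spark 1.x sqlContext in scope; the column names and sample values are made up for illustration):

// Hypothetical input DataFrame with string and long columns:
val features = sqlContext.createDataFrame(Seq(("1.5", 2L), ("3.0", 4L))).toDF("a", "b")

// After running the map above:
featureVectors.collect().foreach(println)
// [1.5,2.0]
// [3.0,4.0]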

andrew
  • 123
  • 1
  • 6
import org.apache.spark.mllib.linalg.Vectors

scaledDataOnly
  .rdd
  .map { row => Vectors.dense(row.getAs[Seq[Double]]("features").toArray) }
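
A caveat: this assumes features is an array-of-doubles column. If it is an MLlib vector column (VectorUDT), the getAs[Seq[Double]] cast fails at runtime; in that case a variant along the lines of the accepted answer applies:

import org.apache.spark.mllib.linalg.Vector

// Variant sketch for a VectorUDT column rather than an array column.
scaledDataOnly
  .rdd
  .map { row => row.getAs[Vector]("features") }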