
I have recently started working with Spark's Dataset API and I am trying out a few examples. The following is one such example, which fails with an AnalysisException.

case class Fruits(name: String, quantity: Int)

val source = Array(("mango", 1), ("Guava", 2), ("mango", 2), ("guava", 2))
val sourceDS = spark.createDataset(source).as[Fruits]
// or val sourceDS = spark.sparkContext.parallelize(source).toDS().as[Fruits]
val resultDS = sourceDS.filter(_.name == "mango").filter(_.quantity > 1)

When executing the above code, I get:

19/06/02 18:04:42 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
19/06/02 18:04:42 INFO CodeGenerator: Code generated in 405.026891 ms
Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve '`name`' given input columns: [_1, _2];
    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$$nestedInanonfun$checkAnalysis$1$2.applyOrElse(CheckAnalysis.scala:110)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$$nestedInanonfun$checkAnalysis$1$2.applyOrElse(CheckAnalysis.scala:107)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformUp$2(TreeNode.scala:278)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:278)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformUp$1(TreeNode.scala:275)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:326)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:275)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformUp$1(TreeNode.scala:275)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:326)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:275)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformUp$1(TreeNode.scala:275)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapChild$2(TreeNode.scala:295)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$4(TreeNode.scala:354)
    at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at scala.collection.TraversableLike.map(TraversableLike.scala:237)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:230)
    at scala.collection.immutable.List.map(List.scala:298)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:354)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:275)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformExpressionsUp$1(QueryPlan.scala:93)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$1(QueryPlan.scala:105)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:105)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:116)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$4(QueryPlan.scala:126)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:126)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:93)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1(CheckAnalysis.scala:107)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1$adapted(CheckAnalysis.scala:85)
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:85)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:82)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:95)
    at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.resolveAndBind(ExpressionEncoder.scala:258)
    at org.apache.spark.sql.Dataset.deserializer$lzycompute(Dataset.scala:214)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$deserializer(Dataset.scala:213)
    at org.apache.spark.sql.Dataset$.apply(Dataset.scala:72)
    at org.apache.spark.sql.Dataset.as(Dataset.scala:431)
    at SocketStreamWordcountApp$.main(SocketStreamWordcountApp.scala:20)
    at SocketStreamWordcountApp.main(SocketStreamWordcountApp.scala)
19/06/02 18:04:43 INFO SparkContext: Invoking stop() from shutdown hook

I thought that when we create a new Dataset, or convert an RDD to a Dataset using as[T], it would just work. Is that not the case?

Just for experimentation, I tried creating a DataFrame and converting it to a Dataset as shown below, but I end up with the same error.

val sourceDS = spark.sparkContext.parallelize(source).toDF().as[Fruits]
// or val sourceDS = spark.createDataFrame(source).as[Fruits]

Any help would be appreciated.

Sivaprasanna Sethuraman

3 Answers


The column names of the input DataFrame have to match the field names of the case class. So you either need an intermediate Dataset[Row]:

val sourceDS = spark.createDataset(source).toDF("name", "quantity").as[Fruits]

or you commit to one API all the way through.

Of course, the most reasonable solution would be to start with Fruits from the beginning:

val source = Array(Fruits("mango", 1), Fruits("Guava", 2), Fruits("mango", 2), Fruits("guava", 2))
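With the typed source in place, the rest of the pipeline from the question compiles unchanged. A minimal sketch, assuming spark is an active SparkSession with its implicits imported:

import spark.implicits._  // provides the Encoder[Fruits] derived from the case class

// The schema now carries the field names "name" and "quantity",
// so no renaming or .as[Fruits] call is needed.
val sourceDS = spark.createDataset(source)
val resultDS = sourceDS.filter(_.name == "mango").filter(_.quantity > 1)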

Starting from Spark 2.3, the column names of the DataFrame have to match the names of the case class parameters, while in previous versions (e.g. 2.1.1) the only constraint was having the same number of columns/parameters. You can create an array of Fruits instead of tuples like this:

case class Fruits(name: String, quantity: Int)

val source = Array(Fruits("mango", 1), Fruits("Guava", 2), Fruits("mango", 2), Fruits("guava", 2))
val sourceDS = spark.createDataset(source)
val resultDS = sourceDS.filter(_.name == "mango").filter(_.quantity > 1)
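The RDD route from the question works the same way once the elements are already Fruits. A sketch, assuming import spark.implicits._ is in scope for the toDS() conversion:

// parallelize yields an RDD[Fruits]; toDS() derives the schema from the
// case class fields, so .as[Fruits] is no longer necessary.
val sourceDS = spark.sparkContext.parallelize(source).toDS()
val resultDS = sourceDS.filter(_.name == "mango").filter(_.quantity > 1)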
Thanas

I think that the answer by @user11589880 will work, but I have an alternative for you to consider:

val sourceDS = Seq(Fruits("Mango", 1), Fruits("Guava", 2)).toDS()

The type of sourceDS will be Dataset[Fruits].
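For completeness, a self-contained sketch of this approach; the local SparkSession setup here is an assumption for illustration, not part of the original answer:

import org.apache.spark.sql.{Dataset, SparkSession}

case class Fruits(name: String, quantity: Int)  // as defined in the question

val spark = SparkSession.builder()
  .appName("fruits-example")  // hypothetical app name
  .master("local[*]")
  .getOrCreate()
import spark.implicits._  // brings the toDS() syntax into scope

val sourceDS: Dataset[Fruits] = Seq(Fruits("Mango", 1), Fruits("Guava", 2)).toDS()
val mangoes = sourceDS.filter(_.name == "Mango")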

Elior Malul