
Looking at the select() function on the Spark Dataset, there are various generated function signatures:

(c1: TypedColumn[MyClass, U1], c2: TypedColumn[MyClass, U2], ...)

This seems to hint that I should be able to reference the members of MyClass directly and be type-safe, but I'm not sure how...

ds.select("member") of course works; it seems like ds.select(_.member) might also work somehow?

Jeremy

2 Answers


In the Scala DSL for select, there are many ways to identify a Column:

  • From a symbol: 'name
  • From a string: $"name" or col("name")
  • From an expression: expr("nvl(name, 'unknown') as renamed")

To get a TypedColumn from a Column, you simply use myCol.as[T].

For example: ds.select(col("name").as[String])
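For a fuller sketch, assuming a local SparkSession and a hypothetical Person case class (neither is part of the original question), all three forms combine with .as[T] like this:

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.{col, expr}

  // Person must be a top-level case class so Spark can derive its Encoder.
  case class Person(name: String, age: Int)

  val spark = SparkSession.builder().master("local[*]").getOrCreate()
  import spark.implicits._  // enables 'name, $"name", .toDS(), and the encoders

  val ds = Seq(Person("Ann", 35), Person(null, 7)).toDS()

  ds.select('name.as[String])                                     // from a symbol
  ds.select($"name".as[String])                                   // from a string
  ds.select(col("name").as[String])                               // from col(...)
  ds.select(expr("nvl(name, 'unknown') as renamed").as[String])   // from an expression

Each call returns a Dataset[String] rather than a DataFrame, which is the point of TypedColumn.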

Sim
  • this answer is correct; however, be aware that as[T] is not type-safe, so it could explode at runtime if you assume the wrong type. – linehrr Dec 10 '18 at 21:14
  • Good point. For the most help from the compiler, you have to switch to Scala types entirely, e.g., `ds.as[T].map { t: T => ... }`. Note that there will be a data conversion cost, as internally Spark uses raw binary data and not Scala types. – Sim Dec 11 '18 at 22:19
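To make the runtime failure concrete, here is a minimal sketch of the pitfall linehrr describes, reusing the hypothetical Person dataset from the example above:

  // Compiles fine, but fails when Spark analyzes the query at runtime,
  // because the string column "name" cannot be upcast to Int:
  ds.select(col("name").as[Int])  // throws AnalysisException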

If you want the equivalent of ds.select(_.member), just use map:

case class MyClass(member: MyMember, foo: A, bar: B)
val ds: Dataset[MyClass] = ???
val members: Dataset[MyMember] = ds.map(_.member)

Edit: The argument for not using map.

A more performant way of doing the same thing is through a projection, without using map at all. You lose the compile-time type checking, but in exchange you give the Catalyst query engine a chance to do something more optimized. As @Sim alludes to in his comment below, the primary optimization is that the whole contents of MyClass need not be deserialized from Tungsten memory space into JVM heap memory just to call the accessor, nor does the result of _.member need to be serialized back into Tungsten.

To make a more concrete example, let's redefine our data model like this:

  // Make sure these are not nested classes
  // (i.e., they live at the top level of a compilation unit).
  case class MyMember(something: Double)
  case class MyClass(member: MyMember, foo: Int, bar: String)

These need to be case classes so that SQLImplicits.newProductEncoder[T <: Product] can provide us with an implicit Encoder[MyClass], required by the Dataset[T] API.
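As a quick sanity check (a sketch assuming a local SparkSession), the implicit resolution can be made explicit:

  import org.apache.spark.sql.{Encoder, Encoders, SparkSession}

  val spark = SparkSession.builder().master("local[*]").getOrCreate()
  import spark.implicits._  // brings newProductEncoder into scope

  // Resolves because MyClass is a top-level case class (a Product):
  val enc: Encoder[MyClass] = implicitly[Encoder[MyClass]]

  // The equivalent explicit form, without the implicit machinery:
  val enc2: Encoder[MyClass] = Encoders.product[MyClass]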

Now we can make the example above more concrete:

  val ds: Dataset[MyClass] = Seq(MyClass(MyMember(1.0), 2, "three")).toDS()
  val membersMapped: Dataset[Double] = ds.map(_.member.something)

To see what's going on behind the scenes we use the explain() method:

membersMapped.explain()

== Physical Plan ==
*(1) SerializeFromObject [input[0, double, false] AS value#19]
+- *(1) MapElements <function1>, obj#18: double
   +- *(1) DeserializeToObject newInstance(class MyClass), obj#17: MyClass
      +- LocalTableScan [member#12, foo#13, bar#14]

This makes the serialization to and from Tungsten explicit.

Let's get to the same value using a projection[^1]:

val ds2: Dataset[Double] = ds.select($"member.something".as[Double])
ds2.explain()

== Physical Plan ==
LocalTableScan [something#25]

That's it! A single step[^2]. No serialization other than the encoding of MyClass into the original Dataset.

[^1]: The reason the projection is defined as $"member.something" rather than $"value.member.something" has to do with Catalyst automatically projecting the members of a single-column DataFrame.

[^2]: To be fair, the * next to the steps in the first physical plan indicates that they will be implemented by WholeStageCodegenExec, whereby those steps are compiled on the fly into a single JVM function that has its own set of runtime optimizations applied to it. So in practice you'd have to test the performance empirically to really assess the benefits of each approach.
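If you do want to test that empirically, a rough sketch along these lines is a starting point (the time helper, row count, and final reduce are illustrative assumptions, not a rigorous benchmark; a fair comparison would also need warm-up runs):

  def time[A](label: String)(block: => A): A = {
    val start = System.nanoTime()
    val result = block
    println(f"$label: ${(System.nanoTime() - start) / 1e6}%.1f ms")
    result
  }

  val big = spark.range(10000000L)
    .map(i => MyClass(MyMember(i.toDouble), i.toInt, i.toString))

  time("map")    { big.map(_.member.something).reduce(_ + _) }
  time("select") { big.select($"member.something".as[Double]).reduce(_ + _) }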

metasim
  • Note that there will be a data conversion cost as internally Spark uses raw binary data and not Scala types. – Sim Dec 11 '18 at 22:20
    What is the advantage of using Dataset in this case? Is it just trading off type safety for performance? I don't quite get when Dataset is going to be useful! – Aravind Yarram Apr 14 '20 at 16:46
  • Most of the time you will just want to use a DataFrame. Sometimes, for interoperability with other functions, you may want to go into Dataset space to be able to call `map`, `flatMap`, etc. without creating a UDF. Or some other side case. – metasim Apr 14 '20 at 19:22
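For what it's worth, here is a minimal sketch of that interop point; the Line case class and the splitting logic are illustrative assumptions:

  // A hypothetical, top-level case class matching the DataFrame's columns.
  case class Line(words: String, id: Int)

  val df = Seq(("a b c", 1), ("d e", 2)).toDF("words", "id")

  val tokens = df
    .as[Line]                                              // DataFrame -> Dataset[Line]
    .flatMap(l => l.words.split(" ").map(w => (w, l.id)))  // no UDF needed
    .toDF("word", "id")                                    // back to a DataFrame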