import java.io.Serializable;

public class EmployeeBean implements Serializable {

    private Long id;

    private String name;

    private Long salary;

    private Integer age;

    // getters and setters

}

Relevant Spark code:

import static org.apache.spark.sql.functions.lit;

SparkSession spark = SparkSession.builder().master("local[2]").appName("play-with-spark").getOrCreate();
List<EmployeeBean> employees1 = populateEmployees(1, 10);

Dataset<EmployeeBean> ds1 = spark.createDataset(employees1, Encoders.bean(EmployeeBean.class));
ds1.show();
ds1.printSchema();

Dataset<Row> ds2 = ds1.where("age is null").withColumn("is_age_null", lit(true));
Dataset<Row> ds3 = ds1.where("age is not null").withColumn("is_age_null", lit(false));

Dataset<Row> ds4 = ds2.union(ds3);
ds4.show();
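
populateEmployees is not shown in the question; a hypothetical sketch of what it might look like, shaped to match the sample output below (a null age for every other employee), is:

// Hypothetical helper, not part of the original question.
// Assumes java.util.List / java.util.ArrayList are imported.
private static List<EmployeeBean> populateEmployees(int from, int to) {
    List<EmployeeBean> employees = new ArrayList<>();
    for (int i = from; i <= to; i++) {
        EmployeeBean e = new EmployeeBean();
        e.setId((long) i);
        e.setName("dev" + i);
        e.setSalary(10000L + i * 1000L);
        e.setAge(i % 2 == 0 ? i : null); // assumption: odd ids get a null age
        employees.add(e);
    }
    return employees;
}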

Relevant Output:

ds1

+----+---+----+------+
| age| id|name|salary|
+----+---+----+------+
|null|  1|dev1| 11000|
|   2|  2|dev2| 12000|
|null|  3|dev3| 13000|
|   4|  4|dev4| 14000|
|null|  5|dev5| 15000|
+----+---+----+------+

ds4

+----+---+----+------+-----------+
| age| id|name|salary|is_age_null|
+----+---+----+------+-----------+
|null|  1|dev1| 11000|       true|
|null|  3|dev3| 13000|       true|
|null|  5|dev5| 15000|       true|
|   2|  2|dev2| 12000|      false|
|   4|  4|dev4| 14000|      false|
+----+---+----+------+-----------+

Is there a better way to add this column to the dataset than creating two datasets and performing a union?

Dev

1 Answer


The same result can be achieved with when/otherwise and a single withColumn, so no filtering and union is needed.

ds1.withColumn("is_age_null" , when(col("age") === "null", lit(true)).otherWise(lit(false))).show()

This gives the same result as ds4, in a single pass over the data.
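
As a further simplification (not part of the original answer), isNull() already produces a boolean column, so the when/otherwise can be dropped entirely; a minimal sketch:

// Sketch beyond the original answer: isNull() yields a boolean column directly.
ds1.withColumn("is_age_null", col("age").isNull()).show();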

vindev