Is there a way (without an extra cost like the one proposed here) for a column to remain nullable=False after aggregating a non-nullable column?
In theory it is impossible to get a null value in the aggregated column if there is no null value in the original column (which is guaranteed here, since the column is non-nullable), so why does it not remain non-nullable? Can we force it?
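For reference, the kind of workaround I want to avoid (like the one linked above) amounts to rebuilding the DataFrame with a hand-written schema, which forces a round trip through the RDD API. Using the left_sum DataFrame from the demo below, it would look roughly like this (just a sketch; non_null_schema is an illustrative name):

non_null_schema = T.StructType(
    [
        T.StructField("left_id", T.StringType(), False),
        T.StructField("left_total_count", T.LongType(), False),
    ]
)
# Re-creating the DataFrame from its RDD re-applies the schema,
# but the round trip is exactly the extra cost I want to avoid.
left_sum_nn = spark.createDataFrame(left_sum.rdd, non_null_schema)
left_sum_nn.printSchema()  # left_total_count: long (nullable = false)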
Here is the demo; the complete use case is to join the aggregate back and have it remain non-nullable:
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql import types as T

spark = SparkSession.builder.getOrCreate()

# All three columns are declared non-nullable.
schema = T.StructType(
    [
        T.StructField("left_id", T.StringType(), False),
        T.StructField("right_id", T.StringType(), False),
        T.StructField("count", T.LongType(), False),
    ]
)
coevents = spark.createDataFrame(
    [
        ("a", "b", 2),
        ("a", "c", 3),
        ("b", "a", 6),
    ],
    schema,
)

# The aggregated column comes back as nullable = true.
left_sum = coevents.groupBy("left_id").agg(F.sum("count").alias("left_total_count"))
left_sum.printSchema()
left_sum.show()

# An inner join cannot introduce null values, yet left_total_count stays nullable.
coevents = coevents.join(left_sum, on="left_id", how="inner")
coevents.printSchema()
coevents.show()
Output:
root
|-- left_id: string (nullable = false)
|-- left_total_count: long (nullable = true)
+-------+----------------+
|left_id|left_total_count|
+-------+----------------+
| b| 6|
| a| 5|
+-------+----------------+
root
|-- left_id: string (nullable = false)
|-- right_id: string (nullable = false)
|-- count: long (nullable = false)
|-- left_total_count: long (nullable = true)
+-------+--------+-----+----------------+
|left_id|right_id|count|left_total_count|
+-------+--------+-----+----------------+
| b| a| 6| 6|
| a| b| 2| 5|
| a| c| 3| 5|
+-------+--------+-----+----------------+
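Would something like the following count as forcing it? As far as I understand, coalesce is nullable only when all of its inputs are nullable, so wrapping the aggregate in coalesce with a non-null literal should make the analyzer keep nullable = false (again just a sketch, meant to replace the agg line in the demo above; I am not sure it is free of extra cost either):

# Intended to replace the original agg, i.e. to run on coevents before the join.
left_sum = coevents.groupBy("left_id").agg(
    F.coalesce(F.sum("count"), F.lit(0)).alias("left_total_count")
)
left_sum.printSchema()  # hoping for: left_total_count: long (nullable = false)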