
I wrote the following code in both Scala and Python, but the DataFrame that is returned doesn't appear to respect the non-nullable fields in the schema I am applying. italianVotes.csv is a CSV file with '~' as the separator and four fields. I'm using Spark 2.1.0.

italianVotes.csv

2657~135~2~2013-11-22 00:00:00.0
2658~142~2~2013-11-22 00:00:00.0
2659~142~1~2013-11-22 00:00:00.0
2660~140~2~2013-11-22 00:00:00.0
2661~140~1~2013-11-22 00:00:00.0
2662~1354~2~2013-11-22 00:00:00.0
2663~1356~2~2013-11-22 00:00:00.0
2664~1353~2~2013-11-22 00:00:00.0
2665~1351~2~2013-11-22 00:00:00.0
2667~1357~2~2013-11-22 00:00:00.0

Scala

import org.apache.spark.sql.types._
val schema = StructType(
  StructField("id", IntegerType, false) ::
  StructField("postId", IntegerType, false) ::
  StructField("voteType", IntegerType, true) ::
  StructField("time", TimestampType, true) :: Nil)

val fileName = "italianVotes.csv"

val italianDF = spark.read.schema(schema).option("sep", "~").csv(fileName)

italianDF.printSchema()

// output
root
 |-- id: integer (nullable = true)
 |-- postId: integer (nullable = true)
 |-- voteType: integer (nullable = true)
 |-- time: timestamp (nullable = true)

Python

from pyspark.sql.types import *

schema = StructType([
    StructField("id", IntegerType(), False),
    StructField("postId", IntegerType(), False),
    StructField("voteType", IntegerType(), True),
    StructField("time", TimestampType(), True),
])

file_name = "italianVotes.csv"

italian_df = spark.read.csv(file_name, schema=schema, sep="~")

italian_df.printSchema()

# output
root
 |-- id: integer (nullable = true)
 |-- postId: integer (nullable = true)
 |-- voteType: integer (nullable = true)
 |-- time: timestamp (nullable = true)

My main question is: why are the first two fields nullable when I have set them to non-nullable in my schema?

– cameres

1 Answer


In general, Spark Datasets either inherit the nullable property from their parents or infer it based on the external data types.
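
For example, when a Dataset is built from a Scala case class, the encoder infers nullability from the external type. A quick illustration (hypothetical names, assuming a spark-shell session where spark.implicits._ is available):

import spark.implicits._

// Primitive Int can never hold null, so it is inferred as non-nullable;
// Option[Int] can be empty, so it is inferred as nullable.
case class Vote(id: Int, postId: Int, voteType: Option[Int])

Seq(Vote(1, 10, Some(2)), Vote(2, 11, None)).toDS.printSchema()
// root
//  |-- id: integer (nullable = false)
//  |-- postId: integer (nullable = false)
//  |-- voteType: integer (nullable = true)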

You can argue whether or not this is a good approach, but ultimately it is sensible. If the semantics of a data source don't support nullability constraints, then applying a schema cannot enforce them either. At the end of the day it is always better to assume that things can be null than to fail at runtime if the opposite assumption turns out to be incorrect.
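
If you really need nullable = false to show up in the resulting schema, a commonly suggested workaround (sketched here with the schema and italianDF values from the question) is to rebuild the DataFrame from its RDD with the desired schema. Note that this only rewrites the metadata; it does not validate the data, so real nulls can still cause failures later at runtime:

// Re-apply the schema on top of the already loaded DataFrame.
val nonNullableDF = spark.createDataFrame(italianDF.rdd, schema)

nonNullableDF.printSchema()
// id and postId should now be reported as nullable = false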

– zero323
  • Hi, how can we know if the semantics of a data source don't support nullability constraints? – Rajnish Kumar Apr 09 '18 at 08:21
  • @rajNishKuMar As a rule of thumb, if something is a plain text format that doesn't provide a schema, it doesn't enforce any constraints. – zero323 Apr 09 '18 at 20:15
  • 1
    @zero323 does it mean if I read json and then do printSchema() I'll always get nullable = true for all fields, even if a field is never null as per the data ? My question: https://stackoverflow.com/questions/61425977/why-spark-outputs-nullable-true-when-schema-inference-left-to-spark-in-case/61426551?noredirect=1#comment108664547_61426551 – advocateofnone Apr 25 '20 at 15:51
  • 1
    Not sure this argument holds water since csv also doesn't have types, yet Spark allows coercion of types by specifying a schema, but not nullability by specifying a schema. IMHO it ought to just throw an exception if the data contains nulls where the schema mandates otherwise. – Jason Nov 09 '20 at 21:54
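
Regarding the last comment: Spark itself won't throw here, but the check is easy to express manually. A minimal Scala sketch (a hypothetical helper, not a built-in API), reusing the schema and italianDF values from the question:

// Collect the columns the schema declares as non-nullable and fail fast
// if any of them actually contain nulls in the loaded data.
val nonNullableCols = schema.fields.filter(!_.nullable).map(_.name)
val offending = nonNullableCols.filter { c =>
  italianDF.filter(italianDF(c).isNull).limit(1).count() > 0
}
require(offending.isEmpty,
  s"Null values found in non-nullable columns: ${offending.mkString(", ")}")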