I have data that I need to load into a PySpark DataFrame even when it is corrupted. I tried using PERMISSIVE mode, but I still get an error. The same code works when account_id has a value.
Here is the data, where account_id (an integer) has no value:
{
    "Name:"
    "account_id":,
    "phone_number":1234567890,
    "transactions":[
        {
            "Spent":1000,
        },
        {
            "spent":1100,
        }
    ]
}
The code I tried:
df=spark.read.option("mode","PERMISSIVE").json("path\complex.json",multiLine=True)
df.show()
The error I get:
pyspark.sql.utils.AnalysisException: Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when the
referenced columns only include the internal corrupt record column
(named _corrupt_record by default). For example:
spark.read.schema(schema).json(file).filter($"_corrupt_record".isNotNull).count()
and spark.read.schema(schema).json(file).select("_corrupt_record").show().
Instead, you can cache or save the parsed results and then send the same query.
For example, val df = spark.read.schema(schema).json(file).cache() and then
df.filter($"_corrupt_record".isNotNull).count().;
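If I understand the error correctly, the suggested workaround is to supply an explicit schema that includes _corrupt_record and to cache the parsed result before querying it. My rough attempt at that in PySpark is below; the schema fields are just my guess from the sample data (transactions left out), so I am not sure this is the right approach:
from pyspark.sql.types import StructType, StructField, StringType, LongType

# Explicit schema guessed from the sample data (partial; transactions omitted).
# _corrupt_record has to be declared here so the malformed record is kept.
schema = StructType([
    StructField("Name", StringType(), True),
    StructField("account_id", LongType(), True),
    StructField("phone_number", LongType(), True),
    StructField("_corrupt_record", StringType(), True),
])

df = (spark.read.schema(schema)
      .option("mode", "PERMISSIVE")
      .option("columnNameOfCorruptRecord", "_corrupt_record")
      .json("path\complex.json", multiLine=True)
      .cache())  # cache the parsed result, as the error message advises

df.show(truncate=False)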
How can I read corrupted data into a PySpark DataFrame?