
I have data that I need to handle with a PySpark DataFrame even when it is corrupted. I tried PERMISSIVE mode, but I still get an error. The same code works when account_id has a value.

The data I have, where account_id (an integer) has no value:

   {
      "Name:"
      "account_id":,
      "phone_number":1234567890,
      "transactions":[
         {
            "Spent":1000,
         },
         {
            "spent":1100,
         }
      ]
   }

The code I tried:

df = spark.read.option("mode", "PERMISSIVE").json("path\complex.json", multiLine=True)
df.show()

The error I get:

pyspark.sql.utils.AnalysisException: Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when the
referenced columns only include the internal corrupt record column
(named _corrupt_record by default). For example:
spark.read.schema(schema).json(file).filter($"_corrupt_record".isNotNull).count()
and spark.read.schema(schema).json(file).select("_corrupt_record").show().
Instead, you can cache or save the parsed results and then send the same query.
For example, val df = spark.read.schema(schema).json(file).cache() and then
df.filter($"_corrupt_record".isNotNull).count().;

How can I read corrupted data into a PySpark DataFrame?
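
For reference, the workaround the error message itself describes would look roughly like this in PySpark. This is only a sketch, not a verified fix: the schema below is inferred from the sample record above, the path is the same placeholder as in the question, and spark is the same session.

from pyspark.sql.types import (ArrayType, LongType, StringType,
                               StructField, StructType)

# Explicit schema matching the sample record; the extra _corrupt_record
# column is where Spark keeps the raw text of rows it cannot parse.
schema = StructType([
    StructField("Name", StringType(), True),
    StructField("account_id", LongType(), True),
    StructField("phone_number", LongType(), True),
    # the sample mixes "Spent" and "spent"; only one spelling is declared
    # here to keep the sketch simple
    StructField("transactions", ArrayType(StructType([
        StructField("spent", LongType(), True),
    ])), True),
    StructField("_corrupt_record", StringType(), True),
])

df = (spark.read
      .schema(schema)
      .option("mode", "PERMISSIVE")
      .option("columnNameOfCorruptRecord", "_corrupt_record")
      .json("path\complex.json", multiLine=True)
      .cache())  # cache the parsed result, as the error message advises

df.show(truncate=False)

With an explicit schema, Spark no longer infers one from a file whose only parsable content is the corrupt record column, so the query no longer references _corrupt_record alone, which is what raises the AnalysisException above.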

  • Please add a snapshot of the data so that this scenario can be reproduced – Manish Jul 01 '20 at 19:14
  • Yes, I have included my data now – Tommy_SK Jul 02 '20 at 04:32
  • You need to have one JSON object per row (see the sketch after these comments): https://stackoverflow.com/questions/57451719/since-spark-2-3-the-queries-from-raw-json-csv-files-are-disallowed-when-the-ref and https://stackoverflow.com/questions/35409539/corrupt-record-error-when-reading-a-json-file-into-spark – murtihash Jul 02 '20 at 04:37
  • But I am able to process this code if I have some value in account_id – Tommy_SK Jul 02 '20 at 06:29
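
Following the Scala example in the error message and the threads linked above, a rough PySpark equivalent for inspecting which rows were captured as corrupt, assuming the cached df from the sketch after the question:

# Rows that failed to parse keep their raw text in _corrupt_record;
# rows that parsed cleanly have it as null.
bad = df.filter(df["_corrupt_record"].isNotNull())
good = df.filter(df["_corrupt_record"].isNull()).drop("_corrupt_record")

bad.show(truncate=False)
good.show()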

0 Answers