I am developing an application that performs data quality checks over input files and captures counts based on reported DQ failures in the data. Does the approach I use make sense, or would you recommend a better way to do this?
I am writing an application in Python that captures DQ errors in the data and gathers counts. I could have used Pandas or NumPy for this; however, since the data volume is huge (~100 GB), I decided to do it through Spark. This is my third application in Python, so while I can write code in it, I am out of my depth when it comes to judging whether this is really the optimal way to do it.
To summarize: I read multiple CSV files, write them out as a single Parquet file, and create a temp view over it that I can query to find DQ issues. I then capture the result of each query in a variable and write it to a list. This list is later used to produce a CSV that becomes the input for a dashboard report. Code below.
# Importing required libraries
import datetime
from pyspark.sql import SparkSession
# Initiating Spark Session
spark = SparkSession.builder.appName("DQ-checks").getOrCreate()
# Initializing Variables
time1 = datetime.datetime.now()
src_file_01 = r'\All kinds of data files\craigslistVehicles.csv'
target_filename = r'\All kinds of data files\craigslistVehicles.parquet'
# Read the CSV file through Spark
df = spark.read.csv(src_file_01, header=True, inferSchema=True)
# Update code to make it flexible enough to read multiple files (a rough sketch follows after the code)
# Write the contents of the CSV file into a Parquet file
df.write.format("parquet").save(target_filename, mode="Overwrite")
print("created a parquet file")
# Create a temporary view over the Parquet file to query data
df2 = spark.read.parquet(target_filename)
df2.createOrReplaceTempView("craigslistVehicles")
# Create a column list from the header of the Spark View
column_list = df2.columns
print(column_list)
# Capturing time before start of the query for benchmarking
time2 = datetime.datetime.now()
result_store = []
# Iterate through all the columns and capture null counts for each column
rule_type = 'Null Check'
results = {}
for column_names in column_list:
    query = "Select count(*) from craigslistVehicles where {} is null".format(column_names)
    # print(query)
    df3 = spark.sql(query).collect()
    for i in df3:
        results.update(i.asDict())
    res_in_num = results['count(1)']
    result_store = [rule_type, column_names, res_in_num]
    print(result_store)
# Next Steps - Update code to add more data quality checks based on requirement (an example is sketched below the code).
# Next Steps - Insert results of the queries into a Spark table that can be used as a log and becomes an input for a dashboard report (also sketched below).
# Capturing time after end of the query for benchmarking
time3 = datetime.datetime.now()
print("Query time is.{}",time3-time2)
print("Total Job run time is.{}",time3-time1)
# Spark Session Stop
spark.stop()
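For the multiple-file part flagged in the comments, this is the rough sketch I have in mind: spark.read.csv also accepts a list of paths (or a glob pattern), assuming all the files share the same layout. The second file name below is just a placeholder.

# Rough sketch for reading several CSV files at once (assumes identical layouts; second file name is a placeholder)
src_file_02 = r'\All kinds of data files\craigslistVehicles_2.csv'
df = spark.read.csv([src_file_01, src_file_02], header=True, inferSchema=True)
# Alternatively, a glob pattern picks up everything in the folder:
# df = spark.read.csv(r'\All kinds of data files\*.csv', header=True, inferSchema=True)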
Currently, this works. I am able to process a 1.1 GB file in under a minute.
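For the "Next Steps" items, this is the direction I am considering. It is only a sketch: the extra "Blank Check" rule, the output column names, and the output path are placeholders I made up, not confirmed requirements. The idea is to keep a dict of rule templates, append one (rule, column, count) row per check, turn the rows into a small Spark DataFrame, and write that out as the CSV the dashboard reads.

# Sketch only - rule names, query templates, output column names and output path are placeholders
rules = {
    'Null Check': "select count(*) from craigslistVehicles where {} is null",
    'Blank Check': "select count(*) from craigslistVehicles where trim(cast({} as string)) = ''",
}
result_rows = []
for rule_name, query_template in rules.items():
    for column_name in column_list:
        count_row = spark.sql(query_template.format(column_name)).collect()[0]
        result_rows.append((rule_name, column_name, count_row[0]))
# Turn the collected rows into a DataFrame and write the dashboard input
# (note: Spark writes a folder of part files at this path, not a single file)
results_df = spark.createDataFrame(result_rows, ["rule_type", "column_name", "failure_count"])
results_df.coalesce(1).write.csv(r'\All kinds of data files\dq_results', header=True, mode="overwrite")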
My questions are:
Does this design make sense? How would you do it if you had to do this? Is there something obvious I can change to make the code cleaner?