We have DBF files in our project that we need to parse and store as Parquet files. We are reading the files with the dbfread module.
Smaller files read fine. However, some files are around 1 GB, and reading those results in OOM errors on the Databricks cluster. We thought of breaking each huge file into smaller chunks, but we can't find a way to do that. Any idea how to break these files into smaller chunks and read them?
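For reference, this is roughly what our current read looks like (the path is a placeholder and the DataFrame step is illustrative, not our exact code):

```python
import pandas as pd
from dbfread import DBF

# dbfread streams records lazily by default (load=False), but
# collecting every record into one DataFrame still materializes
# the whole file in driver memory -- this is where we hit the OOM.
table = DBF("/dbfs/mnt/raw/big_table.dbf", load=False)
df = pd.DataFrame(list(table))  # ~1 GB DBF -> OOM here
df.to_parquet("/dbfs/mnt/curated/big_table.parquet")
```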