I am using Databricks on Azure.
PySpark reads data that's dumped into Azure Data Lake Storage (ADLS).
Every now and then, when I try to read the data from ADLS like so:
spark.read.format('delta').load('/path/to/adls/mounted/interim_data.delta')
it throws the following error:
AnalysisException: `/path/to/adls/mounted/interim_data.delta` is not a Delta table.
The data definitely exists: the folder contents and files show up when I run
%fs ls /path/to/adls/mounted/interim_data.delta
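For what it's worth, here is a minimal Python-side check I can run in the same notebook (a sketch; it assumes the standard dbutils and delta.tables APIs that ship with the Databricks runtime):

from delta.tables import DeltaTable

path = '/path/to/adls/mounted/interim_data.delta'

# the files themselves are visible from the notebook, same as %fs ls
display(dbutils.fs.ls(path))

# ...yet Spark apparently does not recognise the path as a Delta table;
# presumably this returns False whenever the read above fails
print(DeltaTable.isDeltaTable(spark, path))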
Right now, the only workaround is to re-run the script that populates the interim_data.delta table above, which is not viable.
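For context, the interim table is written out roughly along these lines (a hypothetical sketch of the overwrite pattern; the actual population script isn't shown here, and df stands in for the upstream dataframe):

# hypothetical stand-in for the upstream computation that builds the interim data
df = spark.range(10)

# the population script ends with an overwrite of the same mounted path
df.write.format('delta').mode('overwrite').save('/path/to/adls/mounted/interim_data.delta')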