I have access to a repository where a team writes parquet files (without partitioning them) using Delta Lake, i.e. there is a `_delta_log` directory in the repository. I have no access to the table itself, though. To create a DataFrame from those parquet files, I am using the code below:
spark.read.format('delta').load(repo)
Executing this appears to load all of the parquet files in the repository, regardless of what the delta log says is current. How should I proceed to load only the latest version of my data?
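For context on what "latest version" means here, below is a minimal, stdlib-only sketch of how a Delta transaction log is replayed into the current set of data files: each JSON commit file contains `add` actions that register parquet files and `remove` actions that tombstone them, and the surviving set is the latest table version. This is a simplified illustration (it ignores checkpoints and other action types), and the helper name and toy file paths are my own, not from any library:

```python
import json
import os
import tempfile

def latest_files(table_path):
    """Replay the delta log in commit order: 'add' actions register
    data files, 'remove' actions tombstone them; the surviving set
    is the latest version of the table.

    Simplified sketch: real logs also contain checkpoint files and
    other action types (metaData, protocol, txn), ignored here."""
    log_dir = os.path.join(table_path, "_delta_log")
    # Commit files are named with zero-padded version numbers,
    # so lexicographic sort equals commit order.
    commits = sorted(f for f in os.listdir(log_dir) if f.endswith(".json"))
    active = set()
    for commit in commits:
        with open(os.path.join(log_dir, commit)) as fh:
            for line in fh:
                action = json.loads(line)
                if "add" in action:
                    active.add(action["add"]["path"])
                elif "remove" in action:
                    active.discard(action["remove"]["path"])
    return active

# Build a toy two-commit log: commit 0 adds part-0 and part-1,
# commit 1 rewrites part-0 into part-2.
with tempfile.TemporaryDirectory() as repo:
    log_dir = os.path.join(repo, "_delta_log")
    os.makedirs(log_dir)
    with open(os.path.join(log_dir, "0" * 20 + ".json"), "w") as fh:
        fh.write(json.dumps({"add": {"path": "part-0.parquet"}}) + "\n")
        fh.write(json.dumps({"add": {"path": "part-1.parquet"}}) + "\n")
    with open(os.path.join(log_dir, "0" * 19 + "1.json"), "w") as fh:
        fh.write(json.dumps({"remove": {"path": "part-0.parquet"}}) + "\n")
        fh.write(json.dumps({"add": {"path": "part-2.parquet"}}) + "\n")
    files = sorted(latest_files(repo))

print(files)  # → ['part-1.parquet', 'part-2.parquet']
```

A plain `spark.read.parquet(repo)` would see all three parquet files, including the tombstoned `part-0.parquet`, which is consistent with the "entire dataframe" symptom described above.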