I used a little PySpark code to create a Delta table in a Synapse notebook.
Partial code:
# Read file(s) into a Spark DataFrame
sdf = spark.read.format('parquet').option("recursiveFileLookup", "true").load(source_path)
# Create a new Delta table with the new data
sdf.write.format('delta').save(delta_table_path)
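Subsequent loads write to that same path, which is what builds up the version history I want to query later. A minimal sketch of such a follow-up write (sdf_new and new_source_path are just placeholders for the next batch):
# Hypothetical follow-up load: each write to the same path adds a new version to the Delta log
sdf_new = spark.read.format('parquet').option("recursiveFileLookup", "true").load(new_source_path)
sdf_new.write.format('delta').mode('overwrite').save(delta_table_path)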
But now I want to use a different Synapse notebook with Spark SQL to read that Delta table (including its history) that is stored in my Data Lake Gen2. I tried the createOrReplaceTempView option, but that does not let me see the history.
Partial code (block 1)
%%pyspark
# Load the Delta table from the data lake and expose it as a temp view for Spark SQL
ProductModelProductDescription = spark.read.format("delta").load(f'abfss://{blob_account_name}@{container}/Silver/{table}')
ProductModelProductDescription.createOrReplaceTempView(table)
Partial code (block 2)
SELECT * FROM ProductModelProductDescription
Partial code (block 3)
DESCRIBE HISTORY ProductModelProductDescription
This gives an error: Table or view 'productmodelproductdescription' not found in database 'default'
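From the Delta Lake documentation I understand there is also a path-based syntax for DESCRIBE HISTORY (delta.`<path>`), but I'm not sure whether that is the intended approach in Synapse. Roughly what I have in mind, reusing the same variables as in block 1 (a sketch, not verified):
%%pyspark
# Query the history by path instead of by (temp) view name,
# using the delta.`<path>` syntax from the Delta Lake docs
spark.sql(f"DESCRIBE HISTORY delta.`abfss://{blob_account_name}@{container}/Silver/{table}`").show(truncate=False)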
In the video from the Synapse team they show how to work with history, but it doesn't show where the table is stored or how that table is created; it's already there at the beginning. https://www.youtube.com/watch?v=v1h4MnFRM5w&ab_channel=AzureSynapseAnalytics
I can create a DeltaTable object in PySpark:
%%pyspark
# Import modules
from delta.tables import DeltaTable
from notebookutils import mssparkutils

path = 'abfss://mysource@mydatalake.dfs.core.windows.net/Silver/ProductModelProductDescription'
# Wrap the existing Delta table at that path in a DeltaTable object
delta_table = DeltaTable.forPath(spark, path)
But I am not sure how to continue in Spark SQL with this object.
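In PySpark I can at least get the history from that object as a DataFrame, e.g.:
%%pyspark
# history() returns a DataFrame with one row per table version
delta_table.history().show(truncate=False)
What I'm missing is how to reference this table (or the DeltaTable object) from a Spark SQL cell so that DESCRIBE HISTORY works there as well.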