I need to ingest data from Kafka into an Azure Databricks SQL warehouse table using a batch job that runs every hour, and each run should sync only the new Kafka records (those that arrived since the previous run) into the Databricks SQL table.
How can this be done?
I've tried the Spark read/readStream API with a Delta Live Table in Databricks, but no data is synced.
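For reference, this is roughly the pattern I'm attempting: a Structured Streaming read from Kafka with `Trigger.AvailableNow`, so each scheduled run consumes only the offsets accumulated since the last checkpoint and then stops (batch semantics). The broker address, topic, checkpoint path, and target table name below are placeholders for my setup, and `spark` is the session provided by the Databricks runtime:

```python
# Sketch of an hourly incremental Kafka -> Delta ingestion job.
# Assumes a Databricks notebook/job context where `spark` already exists.
from pyspark.sql.functions import col

KAFKA_BOOTSTRAP = "broker1:9092"          # placeholder broker address
TOPIC = "events"                          # placeholder topic name
CHECKPOINT = "/mnt/checkpoints/events"    # placeholder; consumed offsets are tracked here
TARGET_TABLE = "main.default.events_raw"  # placeholder target table

df = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", KAFKA_BOOTSTRAP)
    .option("subscribe", TOPIC)
    # "earliest" applies only to the very first run; subsequent runs
    # resume from the offsets recorded in the checkpoint, so only new
    # records are processed each hour.
    .option("startingOffsets", "earliest")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
)

# availableNow=True processes everything new since the last checkpointed
# offset and then terminates, which suits an hourly scheduled job.
(
    df.writeStream
    .trigger(availableNow=True)
    .option("checkpointLocation", CHECKPOINT)
    .toTable(TARGET_TABLE)
)
```

The job is then scheduled hourly (e.g. as a Databricks Workflow), and the checkpoint is what guarantees the "only new data" behavior. With this in place I still see no rows landing in the table.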