I'm using Synapse in Azure, and I have data in the serverless SQL pool. I want to import that data into a dataframe in Databricks.
I am getting the following error:
Py4JJavaError: An error occurred while calling o568.load.
: java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.sqldw. Please find packages at http://spark.apache.org/third-party-projects.html
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:656)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:195)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:168)
at sun.reflect.GeneratedMethodAccessor102.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.databricks.spark.sqldw.DefaultSource
...
...
...
The PySpark code I am using is:
# Storage account key for the temp dir, set in both the Spark and Hadoop configs
spark.conf.set(
    "fs.azure.account.key.adlsAcct.blob.core.windows.net",
    "GVk3234fds2JX/fahOcjig3gNy198yasdhfkjasdyf87HWmDVlx1wLRmu7asdfaP3g==")
sc._jsc.hadoopConfiguration().set(
    "fs.azure.account.key.adlsAcct.blob.core.windows.net",
    "GVk3234fds2JX/fahOcjig3gNy198yasdhfkjasdyf87HWmDVlx1wLRmu7asdfaP3g==")

df = spark.read \
    .format("com.databricks.spark.sqldw") \
    .option("url", "jdbc:sqlserver://synapse-myworkspace-ondemand.sql.azuresynapse.net:1433;database=myDB;user=myUser;password=userPass123;encrypt=false;trustServerCertificate=true;hostNameInCertificate=*.sql.azuresynapse.net;loginTimeout=30;") \
    .option("tempDir", "wasbs://projects@adlsAcct.blob.core.windows.net/Lakehouse/tempDir") \
    .option("forwardSparkAzureStorageCredentials", "true") \
    .option("dbtable", "tbl_sampledata") \
    .load()
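
For reference, the equivalent read through Spark's built-in JDBC source would look roughly like this. It's only a sketch reusing the server, database, and table names from above, but it doesn't depend on com.databricks.spark.sqldw at all:

# Equivalent read via Spark's built-in JDBC source; no sqldw connector needed.
df_jdbc = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:sqlserver://synapse-myworkspace-ondemand.sql.azuresynapse.net:1433;database=myDB") \
    .option("dbtable", "tbl_sampledata") \
    .option("user", "myUser") \
    .option("password", "userPass123") \
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver") \
    .load()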
I can confirm:
- The firewall setting that allows Azure services to connect is configured.
- The user has access to the serverless SQL pool database (see the standalone check sketched after this list).
- I have tried integrated auth and get the same result.
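
The kind of standalone check behind the second point, independent of Spark (a minimal sketch; it assumes pyodbc and the Microsoft ODBC Driver 17 for SQL Server are available on the cluster):

import pyodbc

# Confirm the SQL user can reach the serverless pool outside of Spark.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=synapse-myworkspace-ondemand.sql.azuresynapse.net;"
    "DATABASE=myDB;UID=myUser;PWD=userPass123")
print(conn.cursor().execute("SELECT TOP 1 * FROM tbl_sampledata").fetchone())
conn.close()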
To my eye, the error looks like Databricks cannot find the format com.databricks.spark.sqldw, but that could be a red herring.
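
A quick way to check that directly is to probe the driver classpath for the connector class through Py4J (a sketch; Py4J auto-imports java.lang on the gateway, so Class.forName is reachable):

# Probe the driver classpath for the sqldw connector class.
try:
    spark._jvm.java.lang.Class.forName("com.databricks.spark.sqldw.DefaultSource")
    print("sqldw connector class found on the driver classpath")
except Exception as e:
    print("sqldw connector class NOT found:", e)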
I'd appreciate any advice and expertise.