
The following example from the Azure team uses the Apache Spark connector for SQL Server to write data to a table.

Question: How can we execute a stored procedure from Azure Databricks when using the Apache Spark connector?

    server_name = "jdbc:sqlserver://{SERVER_ADDR}"
    database_name = "database_name"
    url = server_name + ";" + "databaseName=" + database_name + ";"
    
    table_name = "table_name"
    username = "username"
    password = "password123!#" # Please specify password here
    
    try:
      df.write \
        .format("com.microsoft.sqlserver.jdbc.spark") \
        .mode("overwrite") \
        .option("url", url) \
        .option("dbtable", table_name) \
        .option("user", username) \
        .option("password", password) \
        .save()
    except ValueError as error:
        print("Connector write failed:", error)
  • Does this answer your question? [JDBC connection from Databricks to SQL server](https://stackoverflow.com/questions/63065607/jdbc-connection-from-databricks-to-sql-server) – David Browne - Microsoft May 02 '22 at 20:52
  • Does this answer your question? [How to run stored procedure on SQL server from Spark (Databricks) JDBC python?](https://stackoverflow.com/questions/66670313/how-to-run-stored-procedure-on-sql-server-from-spark-databricks-jdbc-python) – Alex Ott May 03 '22 at 06:12
  • 2nd link shows how to do that from PySpark – Alex Ott May 03 '22 at 06:12
  • @DavidBrowne-Microsoft David, I'm using `python` and have no knowledge of scala. I was wondering if there a python version of your suggested solution. – nam May 08 '22 at 21:48
  • If the ODBC driver is installed you can use pyodbc. But the Scala is boilerplate. – David Browne - Microsoft May 09 '22 at 00:27
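As the comments suggest, the Spark connector itself only reads and writes tables; executing a stored procedure needs a separate connection, e.g. with pyodbc. A minimal sketch follows, assuming pyodbc and the Microsoft ODBC driver are installed on the cluster; the server address, driver name, and `dbo.my_stored_procedure` (with two parameters) are placeholders, not values from the question:

```python
def build_odbc_conn_str(server, database, username, password):
    # "ODBC Driver 17 for SQL Server" is an assumption; use whichever
    # msodbcsql driver version is actually installed on the cluster nodes.
    return (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={server};DATABASE={database};"
        f"UID={username};PWD={password}"
    )

def build_call(proc_name, n_params):
    # ODBC call syntax: one "?" placeholder per procedure parameter.
    placeholders = ", ".join("?" for _ in range(n_params))
    return f"{{CALL {proc_name}({placeholders})}}"

conn_str = build_odbc_conn_str(
    "myserver.database.windows.net",  # placeholder server
    "database_name", "username", "password123!#",
)
call = build_call("dbo.my_stored_procedure", 2)  # hypothetical procedure

# On the cluster (requires `pip install pyodbc` and the ODBC driver):
#   import pyodbc
#   with pyodbc.connect(conn_str, autocommit=True) as conn:
#       conn.execute(call, "param1_value", 42)
```

Keeping the connection-string and call-statement construction in plain helpers means the stored-procedure call can sit alongside the `df.write` code without changing it.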

0 Answers