I'm trying to copy a SQL table from one database to a database on another server using Databricks. I've heard that one way to do this is with pyodbc, because I need to read the data from a stored procedure, and the Spark JDBC reader does not support calling stored procedures. I want to use code similar to the following:
import pyodbc

conn = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=mydatabase.database.windows.net;'
    'DATABASE=AdventureWorks;'
    'UID=jonnyFast;'
    'PWD=MyPassword'
)
conn.autocommit = True

# Example: getting records back from a stored procedure (could also be a SELECT statement)
cursor = conn.cursor()
cursor.execute("EXEC GetConfig 'Dev'")

# Get all records
rc = cursor.fetchall()
The question is: once I have the data in the rc variable via pyodbc, should I bother moving it into a Databricks (Spark) DataFrame, or should I just push the rows straight out to my destination?
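For reference, if the DataFrame route is the better one, I'd expect the conversion and write-out to look roughly like the sketch below. The target server, database, and table names are placeholders I made up, and I'm assuming the standard Spark JDBC writer on the output side:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already provided as `spark` in a Databricks notebook

# Column names come from the cursor metadata; pyodbc Row objects convert to plain tuples
columns = [col[0] for col in cursor.description]
df = spark.createDataFrame([tuple(row) for row in rc], columns)

# Write to the destination over JDBC (url, dbtable, and credentials below are placeholders)
(df.write.format("jdbc")
    .option("url", "jdbc:sqlserver://targetserver.database.windows.net;databaseName=TargetDb")
    .option("dbtable", "dbo.ConfigCopy")
    .option("user", "jonnyFast")
    .option("password", "MyPassword")
    .mode("append")
    .save())

Is that extra hop through a DataFrame worth it, or is it just overhead for this kind of copy?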