**1- Insert rows into a table of Azure SQL Database with Python 3**
Azure Databricks comes with the Microsoft SQL Server JDBC driver pre-installed. We can use this driver to write a DataFrame to Azure SQL Database. For more details, please refer to here.
For example:
jdbcHostname = "<hostname>"
jdbcDatabase = "<database>"
jdbcPort = 1433
jdbcUsername = "<username>"
jdbcPassword = "<password>"

# Build the JDBC URL and connection properties for Azure SQL Database
jdbcUrl = "jdbc:sqlserver://{0}:{1};database={2}".format(jdbcHostname, jdbcPort, jdbcDatabase)
connectionProperties = {
    "user": jdbcUsername,
    "password": jdbcPassword,
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
}
# Write the DataFrame to the "users" table, creating or replacing it
df = spark.createDataFrame([(1, "test1"), (2, "test2")], ["id", "name"])
df.write.jdbc(url=jdbcUrl, table="users", mode="overwrite", properties=connectionProperties)

# Read the table back to verify the write
df1 = spark.read.jdbc(url=jdbcUrl, table="users", properties=connectionProperties)
display(df1)
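Note that mode="overwrite" replaces the whole table. If the goal is to insert rows into an existing table rather than replace it, the "append" save mode can be used instead. A minimal sketch, assuming the users table already exists with a matching schema (the new_rows name is illustrative):

# Append rows to the existing "users" table instead of overwriting it
new_rows = spark.createDataFrame([(3, "test3"), (4, "test4")], ["id", "name"])
new_rows.write.jdbc(url=jdbcUrl, table="users", mode="append", properties=connectionProperties)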

**2- Create a Table from a DataFrame**
If you want to create a Databricks table from a DataFrame, you can use the method registerTempTable or saveAsTable.
registerTempTable creates an in-memory table that is scoped to the cluster in which it was created. The data is stored using Hive's highly-optimized, in-memory columnar format.
saveAsTable creates a permanent, physical table stored in the workspace's underlying cloud storage (Azure Blob storage here; S3 on AWS) using the Parquet format. This table is accessible to all clusters, including the dashboard cluster. The table metadata, including the location of the file(s), is stored in the Hive metastore.
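A minimal sketch of both approaches, reusing the df DataFrame from above (the table names users_temp and users_permanent are illustrative):

# Temporary table: scoped to this cluster (registerTempTable is deprecated since Spark 2.0 in favor of createOrReplaceTempView)
df.registerTempTable("users_temp")
display(spark.sql("SELECT * FROM users_temp"))

# Permanent table: persisted as Parquet and registered in the Hive metastore
df.write.saveAsTable("users_permanent")
display(spark.sql("SELECT * FROM users_permanent"))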
For more details, please refer to here and here.