
Scenario:

1. I am trying to insert a DataFrame directly into a SQL Server table:

import sqlalchemy
from sqlalchemy.pool import NullPool

engine_azure = sqlalchemy.create_engine(sqlchemy_conn_str, echo=True, fast_executemany=True, poolclass=NullPool)
conn = engine_azure.connect()
df_final_result.to_sql('Employee', engine_azure, schema='dbo', index=False, if_exists='replace')

2. Is there any alternative to the `.to_sql` call above that uses a pyodbc connection instead?

3. The code below works, but my DataFrame has 90 columns, so I want to avoid spelling out every column in the iteration shown here.

import pyodbc

cnxn = pyodbc.connect(driver='{ODBC Driver 17 for SQL Server}', server='xyz', database='xyz',
                      trusted_connection='yes')
cursor = cnxn.cursor()
for index, row in df2.iterrows():
    cursor.execute("INSERT INTO Employee(ContactNumber,Name,Salary,Address) values(?,?,?,?)",
                   row.ContactNumber, row.Name, row.Salary, row.Address)

cnxn.commit()
cnxn.close()
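For context, one way to avoid typing out 90 column names is to build the INSERT statement from the DataFrame's own columns and pass the rows to `cursor.executemany`. This is only a sketch: the small `df2` below is a hypothetical stand-in for the real 90-column frame, and the connection lines are shown commented out since they need a live database.

```python
import pandas as pd

# Hypothetical sample frame standing in for the real df2 (which has 90 columns).
df2 = pd.DataFrame({
    "ContactNumber": ["123", "456"],
    "Name": ["A", "B"],
    "Salary": [100, 200],
    "Address": ["X", "Y"],
})

# Derive the column list and the matching number of ? placeholders
# from the DataFrame itself, so no name has to be typed by hand.
cols = ", ".join(df2.columns)
placeholders = ", ".join("?" for _ in df2.columns)
insert_sql = f"INSERT INTO Employee ({cols}) VALUES ({placeholders})"

# With a pyodbc connection, the whole frame then goes in one call:
# cursor.fast_executemany = True
# cursor.executemany(insert_sql, df2.values.tolist())
print(insert_sql)
```

Because the placeholder count is generated alongside the column list, the two can never fall out of sync the way a hand-written VALUES clause can.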
Rahul