
I am trying to convert a CSV file (consisting of 40 columns and 4000 rows) into a table in a SQL Server database. I read somewhere that I can do it in two lines, as simple as:

import pandas

df = pandas.read_csv("file.csv")
df.to_sql(tableName, conn)

I tried to create conn as follows:

conn = pyodbc.connect('DRIVER={ODBC Driver 13 for SQL Server}; SERVER=servername; DATABASE=testdb; Trusted_Connection=yes')

but it failed and I got these errors:

DatabaseError: Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': ('42S02', "[42S02] [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]Invalid object name 'sqlite_master'. (208) (SQLExecDirectW); [42S02] [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]Statement(s) could not be prepared. (8180)")

Apparently I have not defined conn properly, but I don't know how to correct it.

Amy2020

1 Answer


As noted in the DataFrame.to_sql documentation, the second argument (con) is a "sqlalchemy.engine.Engine or sqlite3.Connection":

Using SQLAlchemy makes it possible to use any DB supported by that library. Legacy support is provided for sqlite3.Connection objects.

So you'll need to add SQLAlchemy to your project and use its create_engine function to create an engine object that points to your database, e.g.,

from sqlalchemy import create_engine
# ...
your_sql_engine = create_engine('mssql+pyodbc://@your_DSN_name')

and then pass your_sql_engine as the second argument to pandas' to_sql method.
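For example, here is a minimal end-to-end sketch, assuming the server and database names from the connection string in the question (servername / testdb), Windows authentication, and a placeholder table name; spaces in the driver name are URL-encoded as +:

import pandas
from sqlalchemy import create_engine

# hostname-style connection URL; servername, testdb and the driver name
# are taken from the pyodbc connection string in the question
engine = create_engine(
    "mssql+pyodbc://servername/testdb"
    "?driver=ODBC+Driver+13+for+SQL+Server&trusted_connection=yes"
)

df = pandas.read_csv("file.csv")
# "your_table_name" is a placeholder; if_exists="replace" recreates the
# table on each run, and index=False keeps the DataFrame index out of it
df.to_sql("your_table_name", con=engine, if_exists="replace", index=False)

This uses a hostname-based URL rather than a pre-configured DSN, so you don't have to set one up in the ODBC Data Source Administrator first.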

Gord Thompson