1 - I have a pandas DataFrame and a table in MS SQL with primary keys and specific column types. The connection is established with SQLAlchemy. When I use the to_sql method to upload the DataFrame with if_exists='replace', the data is uploaded, but all the primary keys and column data types are lost: everything becomes varchar(max), which I don't want. When I use if_exists='append', the Jupyter notebook crashes. I made sure the int type is int8 in the DataFrame, because I use tinyint in MS SQL, but that did not solve the problem either. A minimal sketch of the workflow is below.
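
A minimal sketch of what I am doing (the connection string, table name, and column names are placeholders):

    import pandas as pd
    import sqlalchemy

    # Placeholder connection string; the real one points at the MS SQL Server instance.
    engine = sqlalchemy.create_engine(
        "mssql+pyodbc://user:password@server/database?driver=ODBC+Driver+17+for+SQL+Server"
    )

    # Sample frame; 'flag' is cast to int8 to match the tinyint column in MS SQL.
    df = pd.DataFrame({"id": [1, 2, 3], "flag": [0, 1, 0]})
    df["flag"] = df["flag"].astype("int8")

    # if_exists='replace' drops and recreates the table, so the primary key and
    # column types defined in MS SQL are lost; if_exists='append' crashes the notebook.
    df.to_sql("my_table", engine, if_exists="replace", index=False)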

2 - How can we upload data with the pandas to_sql method into a new table with specific column types? That is, how can we specify tinyint, smallint, primary keys, etc.?
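
For illustration, this is the kind of call I am after, passing SQLAlchemy types through to_sql's dtype argument (table and column names are placeholders). This seems to cover the column types, but I don't see a way to declare primary keys here:

    from sqlalchemy.types import SmallInteger
    from sqlalchemy.dialects.mssql import TINYINT

    # dtype maps DataFrame columns to SQLAlchemy types for the created table.
    df.to_sql(
        "my_table",
        engine,
        if_exists="replace",
        index=False,
        dtype={"flag": TINYINT(), "count": SmallInteger()},
    )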

I use pandas version 0.22 and Anaconda 5.1 (the most up-to-date as of Feb 2018).

ilyas

0 Answers