
I'm trying to copy a table from a Redshift database into a pandas DataFrame in Python and then write it back to Redshift.

The first step works, but I have a problem with the second one: I get an error when I try to save a DataFrame that has 100 rows.

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mssql+pyodbc://database")
df = pd.read_sql_query('select * from testing.table1 limit 100', engine)
df.to_sql(name='table2', schema='testing', con=engine, index=False, if_exists='append')

And I'm getting this error:

DBAPIError: (pyodbc.Error) ('HY000', '[HY000] [Amazon][ODBC] (10920) No data can be obtained from input parameter whose value has already been pushed down.

What's strange is that when I save a DataFrame with only 10 rows, there is no error at all.
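Since the 10-row write succeeds, a minimal sketch of what I was thinking of trying next, assuming the failure is tied to the size of the insert batch, is to pass chunksize to to_sql so the rows are inserted in small batches (the connection URL below is the same placeholder as above, and the batch size of 10 is chosen only because 10-row writes work):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mssql+pyodbc://database")  # placeholder DSN, as above

# Read the same 100 rows from Redshift.
df = pd.read_sql_query('select * from testing.table1 limit 100', engine)

# chunksize splits the insert into batches of the given size instead of
# sending all 100 rows in one statement; 10 mirrors the case that succeeds.
df.to_sql(name='table2', schema='testing', con=engine,
          index=False, if_exists='append', chunksize=10)

Is something like this the right way to work around the error, or is the problem elsewhere?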
