I am using Pandas `to_sql` to load a CSV with only about 6,000 rows of data into a very large Azure Synapse DW that has more than enough capacity. The problem is that it loads about 1,500 rows and then fails with this out-of-memory error:
```
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', '[42000] [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]112007;Exceeded the memory limit of 20 MB per session for prepared statements. Reduce the number or size of the prepared statements. (112007) (SQLExecDirectW)')
```
My code looks like this:
```python
df_table.to_sql(
    'Test_Name',
    engine,
    if_exists='replace',
    dtype={col_name: VARCHAR(length=100) for col_name in df_table},
    index=False,
    chunksize=1000,
)
```
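For context, the DataFrame and engine are set up roughly like this (the file name, server, database, and credentials below are placeholders, not my real values):

```python
import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy.types import VARCHAR

# Placeholder CSV path; the real file has ~6,000 rows
df_table = pd.read_csv('test_data.csv')

# Placeholder Synapse connection string using pyodbc
engine = create_engine(
    'mssql+pyodbc://user:password@my-synapse-server.database.windows.net/my_db'
    '?driver=ODBC+Driver+13+for+SQL+Server'
)
```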
I have tried setting a `chunksize`, but it seems to have no effect. Why is it struggling with such a small dataset, and how can I load the full CSV file?