I'm building my first Data Factory pipeline, a very basic one. I have a Data Flow with just a source (a CSV flat file) and a sink (a Synapse table).
The source has 12 columns, so I created a table in Synapse (via SSMS) with all 12 columns as varchar. No keys, just a basic table. When I build the Data Flow activity, the data previews on both the source and the sink look perfect. But when I run the pipeline in Debug mode, it fails with the error below:
Operation on target load_sales_data failed: {"StatusCode":"DFExecutorUserError",
"Message":"at Sink 'Sales': java.sql.BatchUpdateException:
[Microsoft][ODBC Driver 17 for SQL Server][SQL Server]String or binary data would be truncated.
","Details":"at Sink 'Sales': java.sql.BatchUpdateException:
[Microsoft][ODBC Driver 17 for SQL Server][SQL Server]String or binary data would be truncated. "}
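For reference, the table definition was roughly like this (the column names here are placeholders, not the real ones). One detail that may matter: in T-SQL, declaring a column as plain `varchar` with no length inside a CREATE TABLE gives it a default length of `varchar(1)`, so any value longer than one character would be truncated on insert:

```sql
-- Hypothetical sketch of the staging table (real column names differ).
CREATE TABLE dbo.Sales
(
    Col01 varchar,        -- no length given: defaults to varchar(1) in CREATE TABLE
    Col02 varchar,
    -- ... remaining columns declared the same way ...
    Col12 varchar
);

-- Explicit lengths sized to the CSV data would avoid the truncation:
-- Col01 varchar(200), Col02 varchar(200), ...
```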
I just don't get it. I've spent a lot of time trying to figure out what's wrong, with no luck. Can someone please tell me what I'm doing wrong?