I have a table in an Azure SQL Database with approximately 10 columns and 1.7 million rows. The data in each cell is mostly null or short varchar(30) values.
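For reference, the table has roughly this shape (all names here are placeholders, not the real schema):

```sql
-- Illustrative sketch only: placeholder names, ~10 nullable varchar(30) columns
CREATE TABLE dbo.SourceTable (
    Id    INT IDENTITY(1,1) PRIMARY KEY,
    Col01 VARCHAR(30) NULL,
    Col02 VARCHAR(30) NULL,
    -- ... seven more similar varchar(30) columns ...
    Col10 VARCHAR(30) NULL
);
```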
When running a dataflow to a new table in Dataverse, I have two issues:
- It takes around 14 hours (roughly 100k rows per hour)
- It fails after 14 hours with the following (not very helpful) error message (`****` is just entity names I have redacted):
```
Dataflow name,Entity name,Start time,End time,Status,Upsert count,Error count,Status details
****** - *******,,1.5.2021 9:47:20 p.m.,2.5.2021 9:51:27 a.m.,Failed,,,There was a problem refreshing the dataflow. please try again later. (request id: 5edec6c7-3d3c-49df-b6de-e8032999d049).
****** - ,,1.5.2021 9:47:43 p.m.,2.5.2021 9:51:26 a.m.,Aborted,0,0,

Table name,Row id,Request url,Error details
*******,,,Job failed due to timeout : A task was canceled.
```
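For context, the dataflow just loads the full table into a new Dataverse table; assuming no transformations beyond the load, the source read is effectively this (placeholder table name):

```sql
-- Roughly what the dataflow reads from the source: the entire table,
-- with no server-side filtering applied
SELECT *
FROM dbo.SourceTable;
```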
- Is it really expected that this takes 14 hours? :O
- Is there any verbose logging I can enable to get a more useful error message?