
I have a table in an Azure SQL Database with approximately 10 columns and 1.7 million rows. The data in each cell is mostly null or varchar(30).

When running a dataflow to a new table in Dataverse, I have two issues:

  1. It takes around 14 hours (roughly 100k rows per hour)
  2. It fails after those 14 hours with this not-very-helpful error message (**** is just entity names I have removed):

```
Dataflow name,Entity name,Start time,End time,Status,Upsert count,Error count,Status details
****** - *******,,1.5.2021 9:47:20 p.m.,2.5.2021 9:51:27 a.m.,Failed,,,There was a problem refreshing the dataflow. please try again later. (request id: 5edec6c7-3d3c-49df-b6de-e8032999d049).
****** - ,,1.5.2021 9:47:43 p.m.,2.5.2021 9:51:26 a.m.,Aborted,0,0,

Table name,Row id,Request url,Error details
*******,,,Job failed due to timeout : A task was canceled.
```

  1. Is it really expected that this takes 14 hours?
  2. Is there any verbose logging I can enable to get a friendlier error message?
  • I did ADF integration into Dataverse when it was called CDS (as the back end of Dynamics 365). I assume it's still terrible and slow despite the name change. What I did learn is that you should create five _Non-Interactive_ users and load data in parallel through those five users (one way to partition that load is sketched below). I also learnt that the `contact` and `account` entities were definitely the slowest. – Nick.Mc May 05 '21 at 09:49
  • BTW your error is "Job failed due to timeout " and if this always happens after 14 hours or so, then it's a hard timeout limit. The only solution is to... load less or load it faster. – Nick.Mc May 05 '21 at 09:53

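For reference, here is a minimal sketch of the parallel-load idea from the comment above, in Python with the `msal` and `requests` packages. All names in it are assumptions for illustration, not values from the question: the environment URL, the entity set name `new_mytables`, and the list of app registrations (one per parallel worker) are placeholders. Each worker authenticates with its own principal and pushes one slice of the source rows through the Dataverse Web API.

```python
import concurrent.futures
import requests
import msal

# --- Placeholder values (assumptions, not from the question) ---
TENANT_ID = "<tenant-guid>"
ENV_URL = "https://yourorg.crm.dynamics.com"   # Dataverse environment URL
TABLE_SET = "new_mytables"                     # entity set name of the target table

# One app registration / non-interactive principal per worker (five in the comment's suggestion)
CREDENTIALS = [
    {"client_id": "<app-id-1>", "secret": "<secret-1>"},
    # ... up to five entries, one per parallel worker
]

def get_token(client_id: str, secret: str) -> str:
    """Acquire a client-credentials token for the Dataverse Web API via MSAL."""
    app = msal.ConfidentialClientApplication(
        client_id,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=secret,
    )
    result = app.acquire_token_for_client(scopes=[f"{ENV_URL}/.default"])
    return result["access_token"]

def load_partition(rows, client_id, secret):
    """POST one partition of rows to the Dataverse Web API, one record at a time."""
    token = get_token(client_id, secret)
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
        "OData-MaxVersion": "4.0",
        "OData-Version": "4.0",
    }
    url = f"{ENV_URL}/api/data/v9.2/{TABLE_SET}"
    for row in rows:
        resp = requests.post(url, json=row, headers=headers, timeout=60)
        resp.raise_for_status()

def load_in_parallel(all_rows):
    """Split the source rows round-robin across the credentials and load them concurrently."""
    n = len(CREDENTIALS)
    partitions = [all_rows[i::n] for i in range(n)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
        futures = [
            pool.submit(load_partition, part, cred["client_id"], cred["secret"])
            for part, cred in zip(partitions, CREDENTIALS)
        ]
        for f in futures:
            f.result()  # re-raise any worker error
```

The point of using a separate principal per worker is that Dataverse applies service-protection (throttling) limits per user, so spreading the load across several identities lets the partitions run in parallel without one identity being rate-limited; the round-robin split is just the simplest way to give each worker an even share of the rows.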