
With the announcement of change data capture (CDC) in ADF come various questions. I tried it hands-on and came across several scenarios.

  • I implemented CDC for multiple tables from source to target, where the source was an on-premises SQL Server and the sink was an Azure SQL Database.
  • In the Monitor tab I tried to interpret the changes read and changes written figures, but couldn't work out how they are counted for INSERT, UPDATE, and DELETE operations.
  • If I insert a single row into the source table, the Monitor tab displays 4 changes read.
  • And when I perform a DELETE operation, that change is neither read nor written.

So, overall, I'm struggling to understand how the change counts are calculated. Can anybody explain how these counts are worked out?
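
For reference, one way to cross-check these counts is to query the source's CDC change table directly. This is only a sketch, assuming a source table dbo.Orders with native SQL Server CDC enabled and the default capture instance name dbo_Orders (both placeholders):

    -- Sketch only: inspect what SQL Server CDC recorded on the source side.
    -- Assumes CDC is enabled for dbo.Orders with the default capture instance
    -- name 'dbo_Orders'; adjust the names for your own table.
    DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn(N'dbo_Orders');
    DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();

    SELECT
        __$operation,   -- 1 = delete, 2 = insert, 3 = update (before image), 4 = update (after image)
        __$start_lsn,
        *
    FROM cdc.fn_cdc_get_all_changes_dbo_Orders(@from_lsn, @to_lsn, N'all')
    ORDER BY __$start_lsn;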

Please find the screenshot below: (https://i.stack.imgur.com/iLtT5.png)


1 Answer

  1. To support upsert/delete operations you need to choose key columns in the column mapping. Can you try selecting these options? (See the sketch after this list.)
  2. In the Monitoring tab, we currently aggregate the total changes read/written across all sources/sinks.
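
As an illustration of point 1 (a sketch only, with placeholder table and column names): the sink needs a key column the mapping can use to match rows, otherwise update/delete changes cannot be applied to the target.

    -- Sketch only: a hypothetical Azure SQL Database sink table.
    -- Upsert/delete can only be applied when the column mapping knows which
    -- column(s) uniquely identify a row; here OrderID is that column and
    -- would be selected as the key column in the CDC column mapping.
    CREATE TABLE dbo.Orders_sink
    (
        OrderID    INT       NOT NULL,
        CustomerID INT       NOT NULL,
        OrderDate  DATETIME2 NULL,
        CONSTRAINT PK_Orders_sink PRIMARY KEY (OrderID)
    );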


  • Can you please explain it more? As this is new, I am not able to understand it properly. I read the documentation but found no satisfactory explanation. In this preview of CDC, it takes some random file to capture changes when more than one file is defined in a folder (CSV files in Azure Blob Storage) and that folder is pointed to as the CDC source. Thanks – Robin Jan 25 '23 at 16:20
  • As you already know, data flows support different sources with CDC, which helps in capturing only the changed data from sources instead of the entire data. To use it, currently the user needs to be aware of data flows, datasets, pipelines, and triggers. With this new CDC resource, we are abstracting all of that, and the user just needs to specify the source/sink details to move the data from source to destination without much hassle or knowledge of the other artifacts. – GAJJALA VISHNUVARDHAN Jan 25 '23 at 17:51