While searching for a service to migrate our on-premises MongoDB to Azure Cosmos DB with the Mongo API, we came across a service called Azure Databricks. We have a total of 186 GB of data, which we need to migrate to Cosmos DB with as little downtime as possible. How can we improve the data transfer rate? If someone could give some insight into this Spark-based PaaS provided by Azure, it would be very helpful. Thank you.
1 Answer
Have you referred to the migration article on our docs page?
In general, you can assume the migration workload will consume the entire provisioned throughput, so the provisioned throughput gives an estimate of the migration speed. You could consider increasing the RUs for the duration of the migration and reducing them afterwards.
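For a provisioned-throughput (non-serverless) account, the RU bump could be done with the Azure CLI; a minimal sketch, where the resource group, account, database name, and RU value are all placeholders:

    # Sketch: raise throughput on a shared-throughput Mongo API database
    # before the migration run, then re-run with a lower value afterwards.
    # (For a collection with dedicated throughput, the analogous command is
    # `az cosmosdb mongodb collection throughput update`.)
    az cosmosdb mongodb database throughput update \
        --resource-group myResourceGroup \
        --account-name myCosmosAccount \
        --name targetDb \
        --throughput 40000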
The migration performance can also be tuned through these configurations (a sketch follows the list):
Number of workers and cores in the Spark cluster
maxBatchSize in the connector's write configuration
Disabling indexes on the target collection during the data transfer
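A minimal PySpark sketch of what the Databricks job could look like, assuming the MongoDB Spark connector (v3.x-style "mongo" format) is attached to the cluster; the connection strings, database/collection names, and the maxBatchSize value are placeholders to tune against your provisioned RUs:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("mongo-to-cosmos").getOrCreate()

    # Read the source collection from the on-premises MongoDB
    df = (spark.read.format("mongo")
          .option("uri", "mongodb://<source-host>:27017")
          .option("database", "srcDb")
          .option("collection", "srcColl")
          .load())

    # Write to the Cosmos DB Mongo API endpoint. maxBatchSize controls how
    # many documents go per bulk write: lower it if you see 429 throttling,
    # raise it if the provisioned RUs are under-utilised.
    (df.write.format("mongo")
     .option("uri", "mongodb://<account>:<key>@<account>.mongo.cosmos.azure.com:10255/?ssl=true")
     .option("database", "dstDb")
     .option("collection", "dstColl")
     .option("maxBatchSize", 100)
     .mode("append")
     .save())

More workers and cores increase the number of partitions written in parallel, but past the point where the target's throughput is saturated, adding parallelism only produces throttling rather than speed.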

– Sajeetharan
Yes, I have seen that article. Initially we planned to do the task with Azure DMS. The reason we set it aside is that it currently doesn't support a serverless Cosmos DB as a destination, and we want our data in a serverless Cosmos DB. I don't know whether we can increase the RUs for that; they vary based on usage, don't they? Azure Databricks is the only way for us to do the job, but getting it done in a feasible way is the concern. – Jithin Variyar Oct 21 '21 at 07:28