
I have a fairly simple data migration: we're splitting some embedded documents out of one collection into their own collection, which means creating about 140,000 records.

In local testing on MongoDB 3.4 with the MMAPv1 storage engine, the migration took about 20 minutes to run.

In production, the migration took over 4 hours to run!

I ran some experiments: I switched my local storage engine to WiredTiger, imported the data, and ran the migration again. It took about 4 hours, the only difference being the WiredTiger storage configuration (with its defaults, including snappy compression).
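For reference, the only configuration change between the two runs was the `storage.engine` setting in `mongod.conf`; everything else was left at the defaults (paths shown are illustrative):

```yaml
# WiredTiger run (snappy is the default collection block compressor in 3.4)
storage:
  dbPath: /data/db
  engine: wiredTiger
  wiredTiger:
    collectionConfig:
      blockCompressor: snappy

# MMAPv1 run: identical except for
#   engine: mmapv1
```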

Loading the initial dataset is just as fast under MMAPv1 as under WiredTiger, so I suspect compression is not the issue. I'm curious whether anyone knows why I would see such a drastically different result between the two storage engines.
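For context, the migration is essentially of this shape. This is a minimal self-contained sketch, not the actual script; the field names (`children`, `parent_id`) are hypothetical. The pure functions below model the split; with a live deployment you would feed the resulting batches to the driver (e.g. pymongo's `insert_many` with `ordered=False`) rather than inserting documents one at a time:

```python
# Hypothetical sketch of the split: embedded `children` arrays are pulled
# out of parent documents into standalone child documents that carry a
# `parent_id` back-reference. Names are illustrative, not from the post.

def split_embedded(parents):
    """Return (stripped_parents, child_docs) without touching a database."""
    stripped, children = [], []
    for parent in parents:
        for child in parent.get("children", []):
            # Each child becomes its own document, keyed back to its parent.
            children.append({**child, "parent_id": parent["_id"]})
        # Drop the embedded array from the parent copy.
        stripped.append({k: v for k, v in parent.items() if k != "children"})
    return stripped, children


def chunked(items, size):
    """Yield fixed-size batches so inserts can be issued as bulk writes."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


if __name__ == "__main__":
    parents = [{"_id": 1, "name": "a", "children": [{"x": 1}, {"x": 2}]}]
    stripped, kids = split_embedded(parents)
    print(len(kids))  # 2
```

One relevant design note: issuing 140,000 individual inserts amplifies per-operation overhead, and that overhead differs between storage engines; batching the writes typically narrows the gap.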

  • There's no good reason why you should see such a drastic difference. You'll need to share some more info about your production OS, hardware, deployment topology, and details about what exactly this migration process is doing. – helmy Apr 14 '17 at 17:34
  • Performance wise it shouldn't really get worse for most workloads using the newer storage engine. You might want to run [explain](https://docs.mongodb.com/v3.4/tutorial/analyze-query-plan/) on your queries on the old and the new instance, maybe you are missing some indexes and MMAPV1 was more forgiving in your case. – Udo Held Apr 15 '17 at 00:43

0 Answers