
I'm currently trying to move my existing MongoDB deployment to the --directoryperdb option so that I can place databases on different mounted volumes. According to the docs this is achieved via mongodump and mongorestore, but on a large database with over 50 GB of compressed data this takes a very long time, and all indexes need to be completely rebuilt.
Is there a way to move the WiredTiger files inside the current /data/db path, the same way you would when simply changing the db path? I've tried copying the corresponding collection- and index- files into their per-database subdirectory, which doesn't work. Creating dummy collections, replacing their files with the old ones, and then running --repair does work, but I only tested it on a collection with a few documents, so I don't know how long it would take at scale. It also seems very hacky, with a lot of things that could go wrong (for example data loss).

Any advice on how to do this, or is this something that simply should not be done?
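For reference, the documented dump/restore migration I'm trying to avoid looks roughly like this (hosts, ports, and paths below are placeholders, not my actual setup):

```shell
# Dump the whole deployment to a compressed archive
mongodump --host localhost:27017 --gzip --archive=/backup/full.archive

# Stop mongod, move the old dbPath aside, then restart with
# directory-per-db enabled on a fresh data directory
mongod --dbpath /data/db --directoryperdb

# Restore into the new layout (this is the step that rebuilds all indexes)
mongorestore --host localhost:27017 --gzip --archive=/backup/full.archive
```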

nn3112337
  • If this is a replica set you might try to wipe and resync one of the nodes – Joe Jun 29 '20 at 16:14
  • I do not expect it to be possible for you to move the files externally to the database. – D. SM Jun 29 '20 at 16:48
  • +1 to what Joe said except I would suggest provisioning an additional node and configuring it with directory per db. – D. SM Jun 29 '20 at 16:49
  • Alright, this definitely works, however indexes etc. still need to be rebuilt. Is there a performance gain between this and using dump/restore? – nn3112337 Jun 29 '20 at 18:49

0 Answers