
My company is in the process of switching from hosting MongoDB on Windows Server 2012 to hosting it on Linux (Ubuntu 14.04).

Our current backup and restore strategy involves copying all the data files via Robocopy to an NFS share on a Windows Server 2012 machine, and then copying from the NFS share to the target machine.

I am brand new to Linux, and I am trying to get the most performance out of this copy operation. It's approximately 325 files of about 2 GB each (roughly 650 GB total). I mounted the NFS share on the Linux box so I can reference it as a local disk.
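
For reference, the mount looks roughly like this (the host name, export path, and mount point are placeholders, and the larger rsize/wsize values are just one common tuning for big sequential transfers, not necessarily what I have set):

    # Mount the Windows NFS export with 1 MB read/write buffers over TCP.
    # "winserver", the export path, and the mount point are placeholders.
    sudo mount -t nfs -o rsize=1048576,wsize=1048576,hard,tcp \
        winserver:/mongo_backup /mnt/mongo_backup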

I have tried cp and rsync and find both to be incredibly slow.

Currently, Robocopy completes the operation on my network in about 2.5 hours; rsync is closer to 4.5 hours, and cp is around 3.5 hours.
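
For reference, the invocations look roughly like this (source and destination paths are placeholders):

    # Plain copy from the mounted share into the data directory.
    cp -a /mnt/mongo_backup/. /data/mongodb/

    # rsync between two paths that both look local, since the share is mounted.
    rsync -a --progress /mnt/mongo_backup/ /data/mongodb/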

Is there any better way that I can be doing this?

trocheaz
  • Wouldn't a one-step copy using FileZilla (or any other SFTP client) from the current server to the Linux server be faster, since it eliminates one of the copies? Is this a one-off operation or something that you expect to be doing repeatedly? – Paul Haldane Jan 11 '15 at 19:04
  • Currently it's a repeated, automated process that occurs once a week. – trocheaz Jan 12 '15 at 12:45

1 Answer


If you are copying through a mounted network drive, it will be slow. Use rsync -aud directly from Windows to Linux instead; after the first full run it will take less than 1 minute even for many TB (assuming negligible monthly changes).
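
A minimal sketch of that direct rsync, pulled from the Linux side (user, host, and paths are placeholders, and the Windows end needs an SSH/rsync service such as cwRsync or Cygwin's rsync):

    # Pull the data files straight from the Windows host over SSH;
    # after the first full run, only changed files are transferred.
    rsync -aud --progress \
        backupuser@winserver:/cygdrive/d/mongo_backup/ /data/mongodb/

This avoids the double hop through the NFS share entirely, which is where the repeated runs get their speedup.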

If you want versioning, run rdiff-backup after the rsync (a minimal sketch follows the link below). Other options:

http://en.m.wikipedia.org/wiki/List_of_backup_software
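
A minimal rdiff-backup sketch, using the same placeholder paths as above:

    # Mirror the data directory into a repository that keeps reverse
    # diffs, so earlier weekly states stay restorable.
    rdiff-backup /data/mongodb /backup/mongodb-versioned

    # Restore the state as of 7 days ago into a scratch directory.
    rdiff-backup -r 7D /backup/mongodb-versioned /tmp/mongodb-restored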

user1133275