
I want to use Unison to sync two file systems of about 2.5 TB, but every time I try to sync, the host on which I run the job kills it (OOM kill) because Unison uses a ridiculous amount of memory. (The host runs Ubuntu and has 6 GB RAM and 2 GB swap.)

Is there any automated way to make Unison use less RAM, or to split the job into multiple tasks, without the risk of missing parts of the data?

I can manually create multiple .prf files, each with its own set of paths, but then I have to make sure the .prf files get updated every time ANY person creates a new folder or file in the root. That's a recipe for disaster: you'll inevitably find that exactly the important bit of data wasn't copied because a path entry was missing.
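To make the question concrete, here is a minimal sketch of what I mean by generating the .prf files instead of maintaining them by hand: enumerate the root's top-level entries and split them across batch profiles, so re-running the script picks up new folders automatically. The roots, batch size, and profile names below are placeholders, not my real setup, and the demo writes into temporary directories rather than `~/.unison`.

```shell
#!/bin/sh
set -eu

# Demo root with a few sample top-level entries; the real root would be
# the 2.5 TB tree.
ROOT=$(mktemp -d)
mkdir "$ROOT/backups" "$ROOT/media" "$ROOT/projects"
PRFDIR=$(mktemp -d)        # would normally be ~/.unison

BATCH_SIZE=2               # top-level entries per profile
i=0
n=0
new_profile() {
    out="$PRFDIR/batch-$n.prf"
    # Both roots are placeholders for the real local/remote roots.
    printf 'root = %s\nroot = ssh://otherhost/%s\n' "$ROOT" "$ROOT" > "$out"
}
new_profile
for entry in "$ROOT"/*; do
    [ -e "$entry" ] || continue
    if [ "$i" -ge "$BATCH_SIZE" ]; then
        i=0
        n=$((n + 1))
        new_profile
    fi
    # Each top-level entry becomes one 'path' line in the current profile.
    printf 'path = %s\n' "$(basename "$entry")" >> "$out"
    i=$((i + 1))
done

ls "$PRFDIR"               # batch-0.prf, batch-1.prf
cat "$PRFDIR/batch-0.prf"
```

Each generated profile could then be run as `unison batch-0 -batch`, one at a time. The obvious weakness is the one I describe above: anything created between regeneration and the sync run is still at risk, which is why I'm hoping for something built into Unison itself.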

The path parameter is, for unclear reasons, the only Unison parameter that takes path/file information but does not support regexes or wildcards.

(I've tried Google and DuckDuckGo, but was unable to find anything useful.)

  • Did you try the makers of the software? I've never heard of Unison before now, and I doubt I'm alone. Contact the Unison devs, maybe, or check their mailing list – JDS Sep 19 '18 at 20:33
  • From what I've seen of Unison, you won't be able to accomplish this with *only* Unison. I would try to limit the amount of memory that the Unison process can access on the host system. Then (hopefully) Unison will just run more slowly, and not just give up and quit because it *needs* more memory. See [this post](https://unix.stackexchange.com/q/44985/74616) and also [this post](https://superuser.com/q/134269/228569). And if you figure out a solution, please be sure to type it up as an answer here for future users of Unison who run into the same problem. :) – Mike Pierce Sep 20 '18 at 05:40
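Following up on the memory-capping suggestion in the comments, here is a minimal sketch of one way to do it: run Unison in a subshell under a hard virtual-memory limit, so a runaway sync fails with a controlled allocation error instead of triggering the kernel's OOM killer. Note that Unison will most likely abort when it hits the cap rather than run more slowly. The 4 GiB figure and the profile name `big-sync` are placeholders.

```shell
#!/bin/sh
limit=$(
    ulimit -v 4194304          # cap virtual memory at 4 GiB (value is in KiB)
    # unison big-sync -batch   # the real invocation would go here
    ulimit -v                  # show the limit the child would inherit
)
echo "$limit"                  # → 4194304
```

On a systemd host, `systemd-run --user --scope -p MemoryMax=4G unison big-sync -batch` should give a similar cgroup-based cap; I haven't verified how Unison behaves under either limit on a 2.5 TB tree.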

0 Answers