I'm looking for a filesystem that can replicate over long distances and tolerate the link being down for extended periods by queuing the changes to be replicated in a local buffer, and that buffer should be on disk.
DRBD with DRBD Proxy looked like an ideal candidate, but DRBD Proxy buffers in RAM, and I'm not sure that will be adequate.
I'm trying to avoid things like Ceph, which have much more functionality than I need.
It should handle on the order of a billion files on a single filesystem, and it only needs to replicate one way, from filesystem A to filesystem B. There will be a lot of files, but they are only ever written, never changed. A moderate amount of data is written all the time, but not so much that replication couldn't catch up after even a few days offline. No clustering or anything fancy like that is required.
Really, what I'm looking for is something that works like MySQL replication, but for a file system.
I've found a lot of commentary on replicating filesystems, but the missing piece for me is the ability to buffer updates to disk when the link is down for an extended period.
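
To make the behaviour I'm after concrete, here's a rough Python sketch of what I could hack together myself: scan for new files (they're write-once, so creation is the only event that matters), append their paths to an on-disk journal, and replay the journal to the destination whenever it's reachable. All the paths are placeholders I made up, and a full rescan obviously wouldn't scale to a billion files, which is exactly why I'm hoping an existing filesystem or tool already does this properly.

```python
import os
import shutil
import time

SRC = "/data/a"                            # hypothetical source filesystem
DST = "/mnt/replica-b"                     # hypothetical destination (network mount)
JOURNAL = "/var/spool/fsrepl/journal.log"  # disk buffer of pending file paths
CURSOR = "/var/spool/fsrepl/cursor"        # how far the replay has got, kept on disk

def scan_and_queue(seen):
    """Append newly created files to the on-disk journal (files are write-once)."""
    with open(JOURNAL, "a") as j:
        for root, _, files in os.walk(SRC):
            for name in files:
                path = os.path.join(root, name)
                if path not in seen:
                    seen.add(path)
                    j.write(path + "\n")
        j.flush()
        os.fsync(j.fileno())

def replay():
    """Copy journalled files to DST, advancing a durable cursor; if the link is
    down, give up for now and let the journal keep growing on disk."""
    offset = int(open(CURSOR).read() or 0) if os.path.exists(CURSOR) else 0
    with open(JOURNAL) as j:
        j.seek(offset)
        while True:
            line = j.readline()
            if not line:
                break
            src = line.rstrip("\n")
            dst = os.path.join(DST, os.path.relpath(src, SRC))
            try:
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)
            except OSError:
                return                     # destination unreachable: retry next round
            offset = j.tell()
            with open(CURSOR, "w") as c:   # remember progress across restarts
                c.write(str(offset))

if __name__ == "__main__":
    os.makedirs(os.path.dirname(JOURNAL), exist_ok=True)
    seen = set()  # in-memory only: a restart re-queues everything, another reason a real tool is needed
    while True:
        scan_and_queue(seen)
        replay()
        time.sleep(60)
```

That's the shape of the thing: new writes go into a durable queue locally, and the queue drains to the remote side at whatever pace the link allows, whether that's seconds or days later. Is there a filesystem or replication tool that does this natively?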