I have a product that does online backups using rdiff. What currently happens is:
Copy the file to a staging area (so the file won't disappear or be modified while we work on it)
Hash the original file and compute an rdiff signature (used for delta differencing)
Compute an rdiff delta difference (if we have no prior version, this step is skipped)
Compress & encrypt the resulting delta difference
Currently, these phases are performed separately from one another, so we end up iterating over the file multiple times. For small files this is not a big deal (especially given disk caching), but for big files (tens or even hundreds of GB) it is a real performance killer.
I want to consolidate all of these steps into one read/write pass.
To do so, we have to be able to perform all of the above steps in a streaming fashion, while still preserving all of the "outputs" -- the file hash, the rdiff signature, and the compressed & encrypted delta difference file. This means reading a block of data from the source file (say, 100 KB?), processing that block in memory to update the hash and the rdiff signature and to feed the delta differencing, and then writing the delta output to a compress/encrypt output stream. The goal is to greatly minimize the amount of disk thrashing we do.
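To make this concrete, here is a rough, untested sketch (in C, against librsync 2.x) of the single pass I have in mind. `hash_ctx_t`, `hash_init`/`hash_update`, `backup_stream_t`, and `compress_encrypt_write` are placeholders for our existing hashing and compress/encrypt code, not real APIs, and `feed_job` is a small helper (sketched further down) that pushes one chunk of data through a librsync job:

```c
#include <stdio.h>
#include <librsync.h>

/* Placeholders for our existing pieces (NOT real APIs): */
typedef struct hash_ctx hash_ctx_t;            /* e.g. an MD5/SHA context     */
void hash_init(hash_ctx_t *);
void hash_update(hash_ctx_t *, const char *, size_t);
typedef struct backup_stream backup_stream_t;  /* our compress+encrypt writer */
void compress_encrypt_write(backup_stream_t *, const char *, size_t);

/* Drives one librsync job with one chunk of input; sketched further below. */
int feed_job(rs_job_t *job, const char *in, size_t len, int eof,
             void (*sink)(const char *, size_t, void *), void *sink_arg);

static void sink_file(const char *p, size_t n, void *arg)   { fwrite(p, 1, n, (FILE *)arg); }
static void sink_backup(const char *p, size_t n, void *arg) { compress_encrypt_write((backup_stream_t *)arg, p, n); }

int single_pass_backup(const char *src_path, const char *prev_sig_path,
                       const char *new_sig_path, backup_stream_t *out,
                       hash_ctx_t *hash)
{
    FILE *src = fopen(src_path, "rb");
    FILE *new_sig = fopen(new_sig_path, "wb");
    if (!src || !new_sig) return -1;

    /* Load the *previous* version's signature (a small file) so the delta
       job can match blocks against it. */
    rs_signature_t *prev_sig = NULL;
    if (prev_sig_path) {
        FILE *ps = fopen(prev_sig_path, "rb");
        if (!ps || rs_loadsig_file(ps, &prev_sig, NULL) != RS_DONE) return -1;
        fclose(ps);
        rs_build_hash_table(prev_sig);
    }

    /* One job produces the new file's signature, one produces the delta.
       (I think 0 means "use the full strong-sum length" in 2.x.) */
    rs_job_t *sig_job   = rs_sig_begin(RS_DEFAULT_BLOCK_LEN, 0, RS_BLAKE2_SIG_MAGIC);
    rs_job_t *delta_job = prev_sig ? rs_delta_begin(prev_sig) : NULL;

    hash_init(hash);

    char buf[100 * 1024];                 /* ~100 KB per read, as described */
    int eof = 0;
    while (!eof) {
        size_t n = fread(buf, 1, sizeof buf, src);
        eof = (n < sizeof buf);           /* short read => end of file      */

        hash_update(hash, buf, n);                            /* 1. file hash    */
        feed_job(sig_job, buf, n, eof, sink_file, new_sig);   /* 2. new signature */
        if (delta_job)                                        /* 3. delta         */
            feed_job(delta_job, buf, n, eof, sink_backup, out);
    }

    rs_job_free(sig_job);
    if (delta_job) rs_job_free(delta_job);
    if (prev_sig)  rs_free_sumset(prev_sig);
    fclose(new_sig);
    fclose(src);
    return 0;
}
```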
Currently we use rdiff.exe (a thin layer on top of the underlying librsync library) to calculate signatures and generate binary deltas. This means they are done in a separate process, and in one shot rather than in a streaming fashion.
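From skimming librsync.h, I believe the streaming building block is an `rs_job_t` driven by `rs_job_iter()` with an `rs_buffers_t`, which would let me hand the library arbitrary-sized chunks instead of whole files (the one-shot helpers like `rs_sig_file()`/`rs_delta_file()` appear to be built on top of it). Here is my rough, untested guess at the `feed_job` helper used in the sketch above, again assuming librsync 2.x:

```c
#include <stddef.h>
#include <librsync.h>

/* Pushes one chunk of input through a librsync job, handing any output the
 * job produces to `sink`. Call once per chunk; pass eof=1 with the final
 * chunk (or with len=0) so the job can flush and finish. */
int feed_job(rs_job_t *job, const char *in, size_t len, int eof,
             void (*sink)(const char *, size_t, void *), void *sink_arg)
{
    char out[64 * 1024];
    rs_buffers_t bufs;
    bufs.next_in  = (char *)in;
    bufs.avail_in = len;
    bufs.eof_in   = eof;

    for (;;) {
        /* Give the job a fresh output buffer on every iteration. */
        bufs.next_out  = out;
        bufs.avail_out = sizeof out;

        rs_result res = rs_job_iter(job, &bufs);
        size_t produced = sizeof out - bufs.avail_out;
        if (produced)
            sink(out, produced, sink_arg);

        if (res == RS_DONE)
            return 1;              /* job fully finished (only after eof)  */
        if (res != RS_BLOCKED)
            return -1;             /* hard error from librsync             */
        if (!eof && bufs.avail_in == 0 && produced == 0)
            return 0;              /* chunk consumed; wait for more input  */
        /* otherwise: job still has input to consume or output to flush    */
    }
}
```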
How can I get what I need using the librsync library directly?