
In my search for a reliable backup program I have come across rdiff-backup, which seems like a good solution for me. To test it a little bit I did the following (full shell commands are below the list):

  • Create a directory a
  • Add three files to it (1.jpg, 2.jpg, 3.jpg)
  • Create a directory b
  • Create a backup by running rdiff-backup a b
  • Delete 3.jpg from directory b (let's assume accidental deletion, file corruption, etc.)
  • Run backup again using rdiff-backup a b
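
Roughly, the shell session looks like this (the path the JPEGs are copied from is just a placeholder):

    mkdir a
    cp /some/photos/1.jpg /some/photos/2.jpg /some/photos/3.jpg a/
    mkdir b
    rdiff-backup a b    # first backup run
    rm b/3.jpg          # simulate accidental deletion/corruption in the backup
    rdiff-backup a b    # second run: the missing file is not detected or restored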

OK, so here I would expect rdiff-backup to detect that a file is missing from the backup, warn about it, and back it up again. But it doesn't.

So let's see if we can make sure the backup is alright:

  • Run rdiff-backup --verify b

This just hangs without giving any useful information, even with -v 9.
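
For reference, the invocation was along these lines (combining --verify with -v 9 as described above):

    rdiff-backup -v 9 --verify b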

So my question is: can you reproduce this? If so, how the hell can this happen with a widely used backup tool? Detecting that it no longer has a copy of a source file is the most basic thing I expect a backup to do.

I really hope I'm simply overlooking something here...

joekr

1 Answer


This is indeed a problem in rdiff-backup, and the worst part is that even if you notice it, it's not easy to fix. You need to either copy the file back (cp -a) into the backup directory, if it hasn't changed since the last backup, or delete the reference to this file from the rdiff-backup metadata files so that the next run backs it up anew.
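
A minimal sketch of the first option, assuming the directories a and b from the question and that 3.jpg is unchanged in the source since the last backup:

    # Put the missing file back into the mirror; cp -a preserves timestamps
    # and permissions, so the copy matches what the metadata expects.
    cp -a a/3.jpg b/3.jpg

    # Re-check the repository afterwards.
    rdiff-backup --verify b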

fim
  • So with duplicity generating corrupt backups when a transfer is interrupted, and rdiff-backup not noticing missing files, what is everybody using? – joekr Jul 08 '11 at 11:30