I have a DRBD resource with a Primary and a Secondary node (ext4, no LVM).

What is the most suitable way to back up the DRBD resource with compression and incremental backups?

voretaq7
Jose Nobile

2 Answers

As long as it's online, you can't make a reliable backup of a DRBD secondary device, because you can't even mount its file system read-only. Even if it would let you mount the device (it won't while it is secondary and online), you would never get a consistent view of the file system: the ext4 driver on the secondary node has no way of seeing the updates happening on the file system, and its view of things can become outdated very quickly on a busy file system.

Edit: In theory, the following could work:

  • fsync the file system.
  • Take the secondary node offline before any further changes happen on the FS. Doing this reliably without taking the primary node offline can be difficult.
  • Make the backup any way you want.
  • Bring the node online again and resync.
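For completeness, the steps above could be sketched like this (the resource name `r0`, backing device `/dev/sdb1`, and paths are assumptions; as noted below, this leaves you without redundancy for the whole duration):

```shell
# Sketch only: run on the secondary node. Assumes DRBD resource "r0"
# backed by /dev/sdb1; the mirror is degraded while this runs.

sync                                    # flush pending writes (the primary should fsync too)
drbdadm down r0                         # take the resource offline on this node
mount -o ro /dev/sdb1 /mnt/backup-src   # mount the backing device read-only
tar czf /backup/r0-$(date +%F).tar.gz \
    -C /mnt/backup-src .                # compressed backup of the file system contents
umount /mnt/backup-src
drbdadm up r0                           # bring the resource back; DRBD resyncs the delta
```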

This is a stupid approach though, as your mirror is useless and worthless during the backup, and if something happens to the primary node in the meantime, you are screwed. Why bother with DRBD in the first place if you destroy your mirror at regular intervals?

Sven
  • Is a byte-level backup possible, without mounting the filesystem? Disconnecting the nodes and backing up the secondary node would, I believe, be like making a backup of a computer that was cut off from its power source. Not highly reliable, but an acceptable backup, I think. – Jose Nobile Sep 23 '13 at 22:14
  • 2
    See my edit. Bad idea, but I don't see why you don't want to make a regular backup on the primary node anyway. – Sven Sep 23 '13 at 22:24
  • The question makes more sense if you drop "secondary node": I need to back up the DRBD resource, no matter whether primary or secondary; the important thing is to make a backup. I thought of using the secondary node so as not to impact the performance of the primary, and of making a snapshot of the resource. – Jose Nobile Sep 24 '13 at 13:52
  • @JoseNobile You need to back up the contents of a filesystem. There are literally tens of thousands of tools that do this (every *NIX comes with [`dump`](http://linux.die.net/man/8/dump) and [`restore`](http://linux.die.net/man/8/restore), or you could use standard backup software like [Bacula](http://bacula.org)). As was pointed out in the answer you cannot use these on the secondary node (nor can you take a binary backup) because the contents are changing underneath you. What you really need to do is snapshot the primary & back up the snapshot. – voretaq7 Sep 24 '13 at 15:00
  • Thank you for your comments @voretaq7. The solution was: for MySQL, innobackupex with the "wrapper" https://gist.github.com/jmfederico/1495347; this backup is incremental and compressed, and since all tables are InnoDB, nothing is locked during the backup. I use Duplicity to back up the regular files (.php, .js, .css, .jpg), incremental and compressed. Afterwards the backups are synced to another server with rsync, with a wrapper https://gist.github.com/rcoup/5358786 to do the copy in parallel. Should I put this as an answer, or what should be done about the "status" of the question? – Jose Nobile Sep 29 '13 at 01:22
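On the `dump`/`restore` suggestion in the comments, incremental dump levels would look roughly like this (device and file names are assumptions; since the question uses no LVM, the snapshot device is illustrative only, and per the answer above this must not be run against the live secondary):

```shell
# Level 0: full dump of the ext4 device, compressed (-z), to a file.
# /dev/vg0/snap is a hypothetical quiesced snapshot device.
dump -0 -z -f /backup/full.dump /dev/vg0/snap

# Level 1: only the changes since the last lower-level dump.
dump -1 -z -f /backup/inc1.dump /dev/vg0/snap
```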

Mounting a node read-only is not reliable for making a backup, and that method requires stopping the mirror and resyncing afterwards.

The solution was:

For MySQL:

innobackupex with the "wrapper" gist.github.com/jmfederico/1495347; this backup is incremental and compressed, and since all tables are InnoDB, nothing is locked during the backup.
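The innobackupex incremental flow looks roughly like this (a sketch with assumed paths; the wrapper gist automates the bookkeeping):

```shell
# Full base backup into an assumed target directory, compressed with qpress
innobackupex --compress /backups/full

# Later: incremental backup containing only pages changed since the base.
# BASEDIR stands for the timestamped directory the full run created
# (hypothetical placeholder here).
innobackupex --compress --incremental /backups/inc \
    --incremental-basedir=/backups/full/BASEDIR
```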

I use Duplicity to back up the regular files (.php, .js, .css, .jpg), incremental and compressed.

After that, the backup is copied to another server with rsync, using the wrapper gist.github.com/rcoup/5358786 to do the copy in parallel.
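The Duplicity and rsync steps, sketched with assumed paths (Duplicity backups are incremental after the first run and compressed by default; the parallel-copy wrapper is the gist mentioned above, plain rsync is shown here):

```shell
# Back up the web root; incremental after the first full run.
# --no-encryption is used for simplicity; paths are assumptions.
duplicity --no-encryption /var/www file:///backup/www

# Periodically force a fresh full backup instead of another increment
duplicity full --no-encryption /var/www file:///backup/www

# Copy the backup set to the other server (the gist wraps rsync
# to run several transfers in parallel)
rsync -az /backup/www/ backupserver:/backup/www/
```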

Jose Nobile