I have read that copying the data directory will work, but that directory is a mix of logs and snapshots. How do people back up a ZooKeeper infrastructure? Export? Dump? A custom script? What are the best practices?

Krishna Sankar

6 Answers


ZooKeeper writes a snapshot once it determines that it has accumulated enough transactions, and every new snapshot completely supersedes the older ones. So the latest snapshot plus the transaction log from the time of that snapshot is enough to recover the current state. To make the calculation easier, you can simply back up the last 3 snapshots (in case the latest snapshot is corrupt) and the transaction logs from the timestamp of the earliest of those snapshots. The links below have more details.

  1. http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_dataFileManagement
  2. http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
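The retention rule above can be sketched as a small shell function. This is only a sketch: the `zk_backup` name is made up, and the `snapshot.*` / `log.*` file names follow the dataDir layout described in the dataFileManagement link, so verify the paths against your own installation before relying on it.

```shell
#!/bin/sh
# Sketch: back up the 3 newest snapshots plus the transaction logs
# written since the oldest of those snapshots was taken.
zk_backup() {
  data_dir="$1"    # ZooKeeper dataDir (e.g. .../version-2)
  backup_dir="$2"  # destination for the copied files
  mkdir -p "$backup_dir"

  # Keep the three most recently modified snapshots (guards against
  # a corrupt latest snapshot).
  ls -t "$data_dir"/snapshot.* 2>/dev/null | head -n 3 | while read -r snap; do
    cp "$snap" "$backup_dir/"
  done

  # Copy transaction logs modified after the oldest retained snapshot.
  # A production job would also keep the log file that was active when
  # that snapshot started, which this strict mtime comparison can miss.
  oldest=$(ls -t "$data_dir"/snapshot.* 2>/dev/null | head -n 3 | tail -n 1)
  if [ -n "$oldest" ]; then
    find "$data_dir" -name 'log.*' -newer "$oldest" -exec cp {} "$backup_dir/" \;
  fi
}
```

Wrapping this in a nightly cron entry gives you the "last 3 snapshots + matching logs" scheme described above.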
manku
    Starting from version `3.4.0` you can use the `autopurge.snapRetainCount` and `autopurge.purgeInterval` configuration directives to keep your snapshots and transaction logs clean. Now you just need a cronjob which makes a backup of the data directory (`dataDir`). – czerasz Jul 11 '15 at 09:42
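For reference, the corresponding `zoo.cfg` entries might look like this (the values are illustrative, not recommendations; `snapRetainCount` keeps the N newest snapshots and their logs, and `purgeInterval` is in hours, with 0 disabling autopurge):

```
# zoo.cfg — automatic cleanup of old snapshots and transaction logs
autopurge.snapRetainCount=3
autopurge.purgeInterval=24
```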

There's a very nice tool called zk-shell that can do a great many things with ZooKeeper. It has a mirror command that can copy an entire ZooKeeper tree recursively to or from another ZooKeeper instance or a local JSON file.

Source & documentation: https://github.com/rgs1/zk_shell

Installation on CentOS 7:

yum install python2-pip
pip install zk_shell

Example: back up a ZooKeeper tree to a local JSON file, /tmp/zookeeper-backup.json:

zk-shell localhost:2181 --run-once 'mirror / json://!tmp!zookeeper-backup.json/'
Onnonymous

I just had the same requirement and found that most of the available options either don't work or require a lot of customisation.

The best option I found was Guano, a small Java app that recursively visits each node in the tree, starting from a given node, and dumps it into a matching directory structure. You end up with a tree of plain files structured like the actual znode tree.

You can also restore these backups by asking it to restore recursively from any point in that tree. I think this is quite nice both for backups and for exploration. For example, I immediately used ack from the root to find all files containing an entry I cared about.

This is easy to extend into a proper backup simply by running it as a cron job and adding a zip step to compress the whole backup into an archive, as well as handling any rotation needed.
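The compress-and-rotate part of that cron job could be sketched as below. This is an assumption-laden sketch: `archive_and_rotate` is a made-up name, and the guano dump invocation itself is not shown because its exact flags depend on which fork you use; run your dump into a directory first, then hand that directory to this function.

```shell
#!/bin/sh
# Sketch: compress a finished dump directory into a dated archive and
# keep only the newest N archives.
archive_and_rotate() {
  dump_dir="$1"    # directory the dump tool wrote into
  archive_dir="$2" # where dated .tar.gz archives accumulate
  keep="$3"        # how many archives to retain
  mkdir -p "$archive_dir"

  # One dated archive per run.
  stamp=$(date +%Y%m%d-%H%M%S)
  tar -czf "$archive_dir/zk-backup-$stamp.tar.gz" -C "$dump_dir" .

  # Rotation: delete everything but the $keep newest archives.
  ls -t "$archive_dir"/zk-backup-*.tar.gz 2>/dev/null | \
    tail -n +"$((keep + 1))" | while read -r old; do rm -f "$old"; done
}
```

A crontab entry would then just chain the dump and this function, e.g. nightly at 02:00.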

There are a few downsides to the tool:

  1. As it stands on GitHub, the original does not compile due to a few missing imports. Several people have made PRs or forks that fix this issue, such as https://github.com/feldoh/guano which is my fork, wherein I also improved the docs. I have also now pre-compiled the jar and will be pushing binaries into https://bintray.com/feldoh/Guano/guano.
  2. It dumps the data only, which is good for exploration but loses metadata such as the mTime or the data version. Admittedly a restore probably should count as an update, so I can't say it's really a bad thing, but it's not a true point-in-time restore.

NB: I have made my own ZooKeeper editor, as I had similar problems finding one that worked and met my needs. Depending on when you read this, https://github.com/feldoh/JZookeeperEdit may also have an export feature. Issues 13/14 cover this planned feature.

feldoh

Netflix provided a solution for this called Exhibitor. It's a "ZooKeeper co-process for instance monitoring, backup/recovery, cleanup and visualization."

janisz
mbdvg
    Netflix Exhibitor is a supervisor for ZooKeeper and is good for maintaining the ensemble. BUT it does not handle snapshot backups, only transaction logs, so you can only restore transactions one by one, not the entire data set at once. It is not a suitable solution for a ZooKeeper data store with many persistent (not ephemeral) nodes. See here: https://mail-archives.apache.org/mod_mbox/zookeeper-user/201307.mbox/%3C806A441F-F20F-4D96-AD79-E338E9FC08D9@jordanzimmerman.com%3E – Nikita Mendelbaum Mar 01 '16 at 10:48

Please consider using https://github.com/boundary/zoocreeper. Be careful with other tools, like burry.sh or zk_shell: they will snapshot old ephemeral znodes and restore them as persistent znodes in your new cluster, which can lead to coordination problems.

For more info : What is the use case of an ephemeral znode of zookeeper?


We're modifying the zkConfig.py script, a contributed project included when you install ZooKeeper. It lets you interact with ZooKeeper through a Python script.

We're modifying the scripts to dump and save the entire contents each night and then back up the resulting files. I would be curious to hear other people's solutions to this as well.

meverett