Tools like Areca Backup, Bacula (if you have a Linux server for backup storage), or the half-million commercial backup "solutions" can pretty much all be used to back up PostgreSQL.
Logical backups
Let an existing backup management product take care of retention, rotation, and scheduling. What you should be doing is providing a pre-backup script to plug into your backup product. This script runs `pg_dumpall --globals-only`, then runs `pg_dump -Fc ...` for each database to save its contents to a dump directory. That's the usual process shown in the regular backup/restore docs. The backup product then archives that dump directory like any other set of files. In the post-backup script, delete the dump output, since it'll now be in the backup management tool's archives.
I've never seen a backup management product that doesn't have pre- and post-backup hooks.
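As a rough sketch, the pair of hooks could look something like this (bash, assuming a Unix-like host, a user that can connect to every database without a password prompt, and an illustrative dump directory `/var/backups/pgdump`):

```bash
#!/bin/bash
# Hypothetical pre-backup hook: dump globals plus every database into a
# directory that the backup product then archives as ordinary files.
set -euo pipefail

DUMP_DIR=/var/backups/pgdump   # illustrative path, adjust to your layout
mkdir -p "$DUMP_DIR"

# Roles, tablespaces and other cluster-wide objects
pg_dumpall --globals-only > "$DUMP_DIR/globals.sql"

# One custom-format dump per database, skipping the templates
for db in $(psql -AtqX -d postgres -c "SELECT datname FROM pg_database WHERE NOT datistemplate"); do
    pg_dump -Fc "$db" > "$DUMP_DIR/$db.dump"
done
```

```bash
#!/bin/bash
# Hypothetical post-backup hook: the dumps are now in the backup tool's
# archives, so remove the local copies.
rm -f /var/backups/pgdump/globals.sql /var/backups/pgdump/*.dump
```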
Physical backups
You can use a similar strategy for physical backups, not just the logical backups above. The general process is documented, again, in the regular backup/restore docs. Integrating it into a backup manager is complicated by the need to also capture the WAL for the backup, which only finishes being archived after `pg_stop_backup()` runs. Something like this:
- Make certain that WAL archiving is enabled and that the archive directory is readable from the host running the backups.
- In the pre-backup hook, connect to the DB and run `pg_start_backup()` (see the hook sketches after this list).
- Let the backup product, which must support Volume Shadow Copy Service copying, copy the data directory including `pg_xlog`, all tablespaces, etc.
- In the post-backup hook, run `pg_stop_backup()`.
- Define another backup fileset that runs after the main DB backup. This fileset must back up all archived WAL from the start location recorded in the backup label through the stop location returned by `pg_stop_backup()`. Failure to do this will result in a backup you cannot restore.
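A minimal sketch of the two hooks, assuming the low-level backup API as named above (`pg_start_backup()` / `pg_stop_backup()`; PostgreSQL 15 and later rename these to `pg_backup_start()` / `pg_backup_stop()`) and a `postgres` superuser reachable from the backup host:

```bash
#!/bin/bash
# Hypothetical pre-backup hook: put the cluster into backup mode.
# The second argument requests an immediate checkpoint so the hook
# doesn't stall the backup window.
psql -U postgres -d postgres -c "SELECT pg_start_backup('backup-manager fileset', true);"
```

```bash
#!/bin/bash
# Hypothetical post-backup hook: end backup mode. This forces a WAL
# segment switch and waits for the required WAL to be archived, which
# is why the WAL fileset has to run after this point.
psql -U postgres -d postgres -c "SELECT pg_stop_backup();"
```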
If physical backups seem too complicated, stick to using logical backup dumps (`pg_dump`, `pg_dumpall`, etc.) and use streaming replication / PITR + `pg_basebackup` for your physical backups.
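For example, a single `pg_basebackup` run gives you a physical backup the backup manager can then archive like any other directory (the target path is just an example; `-X stream` assumes a reasonably recent PostgreSQL):

```bash
# Plain-format base backup with the WAL it needs streamed alongside it,
# so the result can be restored without consulting a separate WAL archive.
pg_basebackup -D /var/backups/pg_base -X stream -P
```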