Currently I'm running a number of backup servers (e.g. b01, b02, ..., bn) to back up other servers, each with its own IP and running its own FTP / SSH services. However, I would like to create a single interface for storing and retrieving backups, so that my clients and I always connect to the same host while the actual data is stored on multiple back-end servers. This should also improve the scalability of the system.
I'm currently using ZFS with snapshots (and compression/deduplication) to store the backups. Each server that is backed up has its own volume (20-500 GB) on a ZFS backup server, and that volume is snapshotted daily for retention.
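For context, the per-server layout looks roughly like this (a sketch; the pool/dataset names, quota, and 30-day retention are just examples, and the snapshot pruning assumes GNU `head`):

```
# One ZFS dataset per backed-up server, with compression and dedup
zfs create -o compression=lz4 -o dedup=on tank/backups/server01
zfs set quota=500G tank/backups/server01

# Daily snapshot for retention (run from cron)
zfs snapshot tank/backups/server01@$(date +%Y-%m-%d)

# Prune snapshots beyond the retention window (keep the newest 30)
zfs list -H -t snapshot -d 1 -o name -s creation tank/backups/server01 \
  | head -n -30 | xargs -r -n 1 zfs destroy
```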
Is there a program or technique to mount or expose a directory from another (backup) server to an incoming FTP / SSH connection on the "connection server(s)"? It should be scalable and, if possible, redundant; I have been unable to find anything.
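To illustrate what I mean, something along these lines, if it could be made scalable and redundant (a rough sketch using sshfs; the hostnames and paths are hypothetical):

```
# On the connection server: mount each client's volume from the
# back-end server that actually holds it, so one front-end host
# transparently exposes data stored on many back-ends
sshfs backup@b01:/tank/backups/client01 /srv/backups/client01 -o reconnect,allow_other
sshfs backup@b02:/tank/backups/client02 /srv/backups/client02 -o reconnect,allow_other
```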
I'm also open to other solutions, even entirely changing the current backup setup, but any replacement has some requirements:
- Snapshot backups for retention, store differences only
- FTP / SSH (rsync) access (a client-side sketch follows this list)
- If possible, apply some compression and/or deduplication to save disk space
- Scalable to hundreds of terabytes
- Perform well
- Redundant
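Concretely, this is the client-side access pattern I want to keep, against a single hostname (a sketch; the host and path names are hypothetical):

```
# Client pushes its data over SSH to the single front-end; rsync
# transfers only the differences since the last run
rsync -az --delete /var/www/ client01@backup.example.com:backups/www/

# After the transfer, the server side snapshots the client's dataset,
# so each day's state is retained while storing only the deltas:
#   zfs snapshot tank/backups/client01@$(date +%Y-%m-%d)
```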
I have been exploring the possibility of using an object store like OpenStack Swift, but snapshots are not possible there.
Therefore my question is: how can I build some kind of backup cluster with a single interface to replace the current setup of standalone servers?