
We're a medium-sized business with about 50 geographical sites of varying sizes (in terms of infrastructure, users and servers).

Today we have local backup servers at most sites that take file-level full and differential backups of the servers at their site. The data is backed up to robotic tape libraries or to simple duplicated NAS solutions.

About 90% of our servers are VMware virtual machines, and 95% of the VMs run Microsoft Windows (Server 2008 R2 or 2012).

We're planning to consolidate all backups to a central site, running all backups over the WAN (VPN tunnels to the central site). Most sites have fiber-based broadband connections to the internet of more than 50 MBps. We plan to use an agent-based backup solution with agents on all source servers, whether they are ESXi-based VMs or physical machines.

The storage solution for the backups will be Linux servers running ZFS on Linux with JBOD arrays. The total size of the protected data will be somewhere in the region of 100 TB.

We want to use Windows Server 2012 machines as the backup application and database servers.

Naturally we will have to use some kind of "forever incremental" or "incremental with synthesized fulls" backup scheme, since a 50 MBps link cannot accommodate regular full backups.
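To put rough numbers on that claim, here is a back-of-the-envelope sketch. The per-site data size (~2 TB, i.e. 100 TB / 50 sites) and the 2% daily change rate are assumptions for illustration, not figures from our environment; the 50 MB/s is the stated link speed taken at face value as usable throughput.

```python
# Rough feasibility check: how long does a backup take over the WAN link?
# Assumptions (illustrative only): ~2 TB of protected data per site,
# ~2% daily change rate, full 50 MB/s usable end to end.

SITE_DATA_TB = 2.0     # assumed average per site (100 TB / 50 sites)
LINK_MBPS = 50.0       # MB/s, from the question
DAILY_CHANGE = 0.02    # assumed daily change rate

def transfer_hours(data_tb: float, link_mbps: float) -> float:
    """Hours to push data_tb terabytes over a link_mbps MB/s link."""
    seconds = (data_tb * 1_000_000) / link_mbps   # TB -> MB, then / MB/s
    return seconds / 3600

full = transfer_hours(SITE_DATA_TB, LINK_MBPS)
incr = transfer_hours(SITE_DATA_TB * DAILY_CHANGE, LINK_MBPS)
print(f"full backup:       {full:.1f} h")    # ~11.1 h per site
print(f"daily incremental: {incr:.2f} h")    # ~0.22 h (~13 min)
```

An 11-hour full per site is already marginal for a nightly window; incrementals of changed data only are comfortably feasible, which is why a forever-incremental scheme is effectively mandatory here.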

Another concern is that a full restore to a remote site would take too much time. We are therefore considering a solution that can keep the last synthesized full backup locally at each remote site.
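For readers unfamiliar with the term, a "synthesized full" is built by merging the previous full with subsequent incrementals on the backup server, so no full ever crosses the WAN again. A toy sketch of the merge (block IDs and data here are made up for illustration):

```python
# Toy sketch of a "synthesized full": merge a base full with incremental
# block maps so the newest full image can be rebuilt (and cached at the
# remote site) without re-sending all blocks over the WAN.

def synthesize_full(base: dict, incrementals: list) -> dict:
    """base and each incremental map block_id -> block data;
    later incrementals win, yielding the latest full image."""
    full = dict(base)
    for incr in incrementals:
        full.update(incr)        # changed blocks replace old ones
    return full

base = {0: b"aaaa", 1: b"bbbb", 2: b"cccc"}
mon  = {1: b"BBBB"}              # Monday: block 1 changed
tue  = {2: b"CCCC", 3: b"dddd"}  # Tuesday: block 2 changed, block 3 added
latest = synthesize_full(base, [mon, tue])
print(latest)  # {0: b'aaaa', 1: b'BBBB', 2: b'CCCC', 3: b'dddd'}
```

If that merged image is stored (or replicated back) at the remote site, a full restore reads from local disk instead of pulling terabytes back over the VPN.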

The question is: what backup software do you recommend for this setup? Our feeling is that we're somewhere in the middle between an SMB solution and an enterprise solution, and we're having a tough time picking candidates for the job.

Bozzor

1 Answer


Obviously there are several possible solutions to this type of problem.

You can roll your own or buy a packaged product, depending on your budget.

Roll-your-own can be rather painful to implement, but once working it is usually easy to maintain. However, if each server is around 50 GB, backing up many machines with full backups quickly becomes impractical in terms of disk usage and I/O bandwidth.
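The load on the central site can also be put in numbers. The figures below (50 sites, ~40 GB of changed data per site per day, an 8-hour backup window) are assumptions for illustration, consistent with the ~2% change rate guessed earlier, not measurements:

```python
# Aggregate load at the central site: can it absorb all sites' nightly
# incrementals in one window? Assumptions (illustrative): 50 sites,
# ~40 GB changed data per site per day, an 8-hour backup window.

SITES = 50
INCR_GB_PER_SITE = 40.0      # assumed: ~2% of ~2 TB per site
WINDOW_HOURS = 8.0

total_gb = SITES * INCR_GB_PER_SITE
needed_mbps = (total_gb * 1000) / (WINDOW_HOURS * 3600)  # GB->MB, / seconds
print(f"nightly intake:  {total_gb:.0f} GB")         # 2000 GB
print(f"sustained write: {needed_mbps:.0f} MB/s")    # ~69 MB/s
```

~70 MB/s sustained sequential write is easy for a ZFS JBOD pool, but the central internet uplink and VPN termination must also sustain that aggregate inbound rate.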

Commercial Solutions

I have seen (and used) several packages; the best for this scenario are the ones that offer:

  1. Data deduplication
  2. Good on-the-fly data compression
  3. Restore from incremental archives
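Deduplication matters particularly in your case: dozens of Windows VMs cloned from similar templates share most of their blocks. A minimal sketch of the idea, storing each unique block once keyed by its content hash (the "VM images" here are obviously toy data):

```python
# Minimal illustration of block-level deduplication: identical blocks
# across servers (e.g. cloned Windows VMs) are stored once, keyed by
# their content hash.
import hashlib

def dedup_store(streams, block_size=4):
    """Split each stream into fixed-size blocks and keep unique blocks
    keyed by SHA-256 digest; returns the content-addressed store."""
    store = {}
    for data in streams:
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            store[hashlib.sha256(block).hexdigest()] = block
    return store

# Two toy "VM images" sharing most of their content:
vm1 = b"WINDOWSWINDOWSDATA1"
vm2 = b"WINDOWSWINDOWSDATA2"
store = dedup_store([vm1, vm2])
raw = len(vm1) + len(vm2)
kept = sum(len(b) for b in store.values())
print(f"raw {raw} bytes -> stored {kept} bytes")
```

Real products do this with variable-size blocks and source-side hashing, so duplicate blocks are never even sent over the WAN.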

All of the major players (EMC, IBM, Veritas, Microsoft, Oracle) have products in this space.

You should probably also look at tape behind a disk-to-disk (D2D) staging tier, as your data pool will become quite large.

Tim Seed
  • We would really like to use our own Linux ZFS storage for the backup solution, since we already have the expertise and feel we could handle most eventualities with this solution. So a packaged solution with both software and storage is not our preferred choice. – Bozzor Jan 29 '16 at 14:11
  • Ok, I can understand that... but you mentioned that your production servers total "100TB"; with 50 sites that is only 2 TB on average. Are the VM clients simply clones of one specific implementation, or is each server **unique** (forgetting IP addresses etc.)? I suspect ZFS's native deduplication will be thwarted by the fact that you are backing up VMs (layers within layers). Have a look at this [Veritas DeDup](http://www.veritas.com/community/forums/vmware-esxi-55-vm-backup-deduplication-option) thread. – Tim Seed Jan 29 '16 at 14:26