We've used `tar` to back up and compress (gzip) selected directories on our file server, with very good results until recently.
Every backup is stored on mirrored (RAID) hard drives and simultaneously uploaded to an Amazon S3 bucket for off-site storage.
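For reference, the current routine looks roughly like this (the directory paths, archive naming, and the use of the AWS CLI for the upload step are assumptions added for illustration, not our exact commands):

```bash
# Sketch of the current approach: one gzip-compressed tar archive per
# selected directory, then an off-site copy to S3.
# Paths and the bucket name below are placeholders.

# 1. Archive and compress a selected directory
tar -czf /backups/projects-$(date +%F).tar.gz /srv/projects

# 2. Upload the same archive to S3 (assuming the AWS CLI is used here)
aws s3 cp /backups/projects-$(date +%F).tar.gz s3://our-backup-bucket/
```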
As our data has grown rapidly, so have our backups. This week the uploads have been running 24/7 just to sync the fresh backups from the last 7 days, and they still haven't finished. A better connection would ease part of the problem, but that isn't an option right now, and I'd rather build a real solution than settle for a workaround.
What alternative strategy could we use to back up our directories that keeps us away from archives tens of gigabytes in size, still lets us use `tar`, and reduces the bandwidth needed to sync the files off-site?