Is there another automated way of syncing two Amazon S3 buckets besides using s3cmd? Maybe Amazon offers this as an option? The environment is Linux, and every day I would like to sync new & deleted files to another bucket. I hate the thought of keeping all my eggs in one basket.
-
Why don't you want to use s3cmd? – blahdiblah Dec 07 '11 at 02:29
-
A good use case for mirroring buckets is cross-region failover or disaster recovery. Netflix practices this for example. You keep your buckets in sync across regions so you have a warm copy ready in case of outages. – alph486 Oct 31 '14 at 18:46
-
@DerrickPetzold another great use-case (which is exactly why I find myself on this page) is that our operations have matured enough that it's now both feasible and desirable to have our [internal] demo environment rebuild and include a copy of the production database, which would be useless without a copy of the production blobs stored in S3. – Chris Tonkinson May 11 '17 at 17:08
4 Answers
You can use the standard AWS CLI to do the sync. You just have to run something like:
aws s3 sync s3://bucket1/folder1 s3://bucket2/folder2
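Since the question asks for deleted files to be carried over as well, note that sync on its own only copies new and changed objects; adding the --delete flag also removes destination objects that no longer exist in the source. A minimal daily setup might look like this (bucket names, paths, and the schedule are placeholders):

# One-way mirror: copy new/changed objects and remove ones deleted from the source.
aws s3 sync s3://bucket1/folder1 s3://bucket2/folder2 --delete

# Example crontab entry to run the mirror every night at 02:00:
0 2 * * * /usr/local/bin/aws s3 sync s3://bucket1/folder1 s3://bucket2/folder2 --delete >> /var/log/s3-sync.log 2>&1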

S3 buckets != baskets
From their site:
Data Durability and Reliability
Amazon S3 provides a highly durable storage infrastructure designed for mission-critical and primary data storage. Objects are redundantly stored on multiple devices across multiple facilities in an Amazon S3 Region. To help ensure durability, Amazon S3 PUT and COPY operations synchronously store your data across multiple facilities before returning SUCCESS. Once stored, Amazon S3 maintains the durability of your objects by quickly detecting and repairing any lost redundancy. Amazon S3 also regularly verifies the integrity of data stored using checksums. If corruption is detected, it is repaired using redundant data. In addition, Amazon S3 calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.
Amazon S3’s standard storage is:
- Backed with the Amazon S3 Service Level Agreement.
- Designed to provide 99.999999999% durability and 99.99% availability of objects over a given year.
- Designed to sustain the concurrent loss of data in two facilities.
Amazon S3 provides further protection via Versioning. You can use Versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. This allows you to easily recover from both unintended user actions and application failures. By default, requests will retrieve the most recently written version. Older versions of an object can be retrieved by specifying a version in the request. Storage rates apply for every version stored.
That's very reliable.
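If you still want an extra safety net inside the bucket itself, versioning can be switched on from the CLI too. A minimal sketch, assuming a bucket named my-bucket (the name is a placeholder):

# Turn on versioning so overwritten or deleted objects remain recoverable.
aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled

# Confirm the new status.
aws s3api get-bucket-versioning --bucket my-bucket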

I'm looking for something similar and there are a few options:
- Commercial applications like s3RSync. Additionally, CloudBerry for S3 provides PowerShell extensions for Windows that you can use for scripting, but I know you're using *nix.
- AWS API + (favorite language) + cron (hear me out). It would take a decently savvy person with no experience in AWS's libraries only a short time to build something that compares and copies files (using the ETag feature of S3 keys): provide a source/target bucket and credentials, iterate through the keys, and issue the native "Copy" command in AWS. I used Java. If you use Python and cron you could make short work of a useful tool; a rough sketch follows below.
I'm still looking for something already built that's open source or free. But #2 is really not a terribly difficult task.
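For what it's worth, here is a rough sketch of option #2 done with the AWS CLI and plain shell instead of an SDK. The bucket names are placeholders, and comparing ETags is only a reliable change check for objects uploaded in a single part (multipart uploads don't produce plain MD5 ETags):

#!/usr/bin/env bash
# Naive one-way sync: copy an object from SRC to DST when it is missing or its ETag differs.
set -euo pipefail

SRC=bucket1   # placeholder: source bucket
DST=bucket2   # placeholder: destination bucket

# List every key and its ETag in the source bucket (the CLI paginates automatically).
aws s3api list-objects-v2 --bucket "$SRC" \
    --query 'Contents[].[Key,ETag]' --output text |
while IFS=$'\t' read -r key src_etag; do
    # Look up the destination ETag; empty if the object does not exist there yet.
    dst_etag=$(aws s3api head-object --bucket "$DST" --key "$key" \
        --query ETag --output text 2>/dev/null || true)
    if [ "$src_etag" != "$dst_etag" ]; then
        echo "copying $key"
        # Server-side copy; the data never leaves S3.
        aws s3api copy-object --copy-source "$SRC/$key" --bucket "$DST" --key "$key" > /dev/null
    fi
done

Dropped into cron, that covers new and changed files; deletions would need a similar pass that lists the destination and removes keys missing from the source.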
EDIT: I came back to this post and realized that nowadays Attunity CloudBeam is also a commercial solution many folks use.

It is now possible to replicate objects between buckets in two different regions via the AWS Console.
The official announcement on the AWS blog explains the feature.
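Replication can also be configured without the console. A minimal sketch with the CLI, assuming versioning is already enabled on both buckets and that the bucket names and IAM role ARN below are placeholders:

# Replicate everything from source-bucket to destination-bucket
# (applies to new uploads only; existing objects are not copied retroactively).
aws s3api put-bucket-replication --bucket source-bucket --replication-configuration '{
  "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Prefix": "",
      "Destination": { "Bucket": "arn:aws:s3:::destination-bucket" }
    }
  ]
}'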

-
This is great as long as you don't need automation; unfortunately, the OP specifies the sync needs to be automated. – Chris Tonkinson May 11 '17 at 17:10
-
@ChrisTonkinson Cross region replication of bucket contents is automated. Announcement states: "Once enabled, every object uploaded to a particular S3 bucket is automatically replicated to a designated destination bucket located in a different AWS region." – ljcundiff May 15 '17 at 17:36