I'm using s3cmd 1.1.0 beta to upload files larger than 5 GB to Amazon S3. Versions of s3cmd older than 1.1.0 cannot upload files above 5 GB (Amazon's single-part upload limit); the 1.1.0 beta can, because it uses multipart upload.

The problem is: I am not able to perform ANY operation on the files larger than 5 GB that were uploaded through s3cmd 1.1.0. I suspect this may be because the ETag set by s3cmd does not match the ETag that Amazon expects (see the sketch after the list below):

The specific problems are as follows (both through the web console):

  1. When I try to copy these files from one bucket to another, Amazon S3 complains: "The following objects were not copied due to errors from:"
  2. When I try to change any properties on these files, S3 complains: "The additional properties were not enabled or disabled due to errors for the following objects in:"
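
For what it's worth, my understanding is that S3 does not store a plain MD5 as the ETag of a multipart upload: it reports the MD5 of the concatenated per-part MD5 digests, followed by "-<number of parts>". Here is a minimal sketch for recomputing that value locally; the part size must match whatever chunk size s3cmd actually used (15 MB is, I believe, its default):

```python
import hashlib

def multipart_etag(path, part_size=15 * 1024 * 1024):
    """Recompute the ETag S3 assigns to a multipart upload:
    MD5 of the concatenated per-part MD5 digests, plus "-<part count>".
    part_size must match the chunk size the uploader actually used."""
    part_digests = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(part_size)
            if not chunk:
                break
            part_digests.append(hashlib.md5(chunk).digest())
    combined = hashlib.md5(b"".join(part_digests)).hexdigest()
    return "{0}-{1}".format(combined, len(part_digests))
```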

Is there any way to fix the ETags on these larger-than-5-GB files so that I can perform operations on them?

1 Answer

OK, after some investigation, I found that the problem comes down to Amazon S3's simple, single-request APIs being unable to handle objects larger than 5 GB.

In order to copy (or do any other such operation on) an object larger than 5 GB, you have to specifically use Amazon's multipart upload API and its related operations for large objects.
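
To illustrate, here is a rough sketch of such a server-side multipart copy, using the modern boto3 SDK as an illustration (the bucket and key names are placeholders, and the 512 MB part size is an arbitrary choice within Amazon's 5 MB-5 GB part limits):

```python
import boto3

s3 = boto3.client("s3")

SRC_BUCKET = "source-bucket"   # placeholder names
DST_BUCKET = "dest-bucket"
KEY = "big-file.bin"
PART_SIZE = 512 * 1024 * 1024  # each part must be 5 MB-5 GB (except the last)

# Total size of the source object, needed to slice it into byte ranges.
size = s3.head_object(Bucket=SRC_BUCKET, Key=KEY)["ContentLength"]

# Start a multipart upload on the destination, then copy byte ranges of
# the source into it with UploadPartCopy -- no data passes through the
# client machine.
upload_id = s3.create_multipart_upload(Bucket=DST_BUCKET, Key=KEY)["UploadId"]

parts = []
part_number = 1
offset = 0
while offset < size:
    last = min(offset + PART_SIZE, size) - 1
    resp = s3.upload_part_copy(
        Bucket=DST_BUCKET,
        Key=KEY,
        UploadId=upload_id,
        PartNumber=part_number,
        CopySource={"Bucket": SRC_BUCKET, "Key": KEY},
        CopySourceRange="bytes={0}-{1}".format(offset, last),
    )
    parts.append({"ETag": resp["CopyPartResult"]["ETag"],
                  "PartNumber": part_number})
    offset = last + 1
    part_number += 1

s3.complete_multipart_upload(
    Bucket=DST_BUCKET,
    Key=KEY,
    UploadId=upload_id,
    MultipartUpload={"Parts": parts},
)
```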

Apparently, even Amazon's AWS web console uses only the simple APIs, which work only on objects smaller than 5 GB, so if you want to do anything with larger files, you need to write your own code against the AWS API!
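
If you'd rather not drive the multipart calls yourself, boto3's managed copy helper will switch to a multipart copy automatically once the object crosses a configurable size threshold. A minimal sketch (names are placeholders):

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Fall back to a multipart copy for anything above the simple-copy limit.
s3.copy(
    CopySource={"Bucket": "source-bucket", "Key": "big-file.bin"},
    Bucket="dest-bucket",
    Key="big-file.bin",
    Config=TransferConfig(multipart_threshold=5 * 1024 ** 3),
)
```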
