
I am trying to set the Cache-Control header on all our existing files in the S3 storage by executing a copy to the exact same key but with new metadata. The S3 API supports this through the x-amz-metadata-directive: REPLACE header. In the documentation of the S3 API compatibility at https://docs.developer.swisscom.com/service-offerings/dynamic.html#s3-api, the Object Copy method is listed neither as supported nor as unsupported.

The copy itself works fine (to another key), but the option to set new metadata does not seem to work, whether I copy to the same key or to a different one. Is this not supported by the ATMOS S3-compatible API, and/or is there another way to update the metadata without having to read all the content and write it back to the storage?
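
For reference, this is roughly what such a copy-in-place should look like on the wire per the S3 CopyObject documentation (bucket, key and host are placeholders; authentication and date headers omitted):

PUT /mykey HTTP/1.1
Host: mybucket.s3.example.com
x-amz-copy-source: /mybucket/mykey
x-amz-metadata-directive: REPLACE
Cache-Control: private, max-age=31536000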

I am currently using the Amazon Java SDK (v. 1.10.75.1) to make the calls.

UPDATE:

After some more testing, it seems the issue I am having is more specific: the copy works, and I can successfully change other metadata like Content-Disposition or Content-Type. Only the Cache-Control header is ignored.

As requested, here is the code I am using to make the call:

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;

BasicAWSCredentials awsCreds = new BasicAWSCredentials(accessKey, sharedsecret);
AmazonS3 amazonS3 = new AmazonS3Client(awsCreds);
amazonS3.setEndpoint(endPoint);

// Clone the current metadata, override Cache-Control, and copy the object onto itself.
// Supplying new metadata makes the SDK send x-amz-metadata-directive: REPLACE.
ObjectMetadata metadata = amazonS3.getObjectMetadata(bucketName, storageKey).clone();
metadata.setCacheControl("private, max-age=31536000");
CopyObjectRequest copyObjectRequest = new CopyObjectRequest(bucketName, storageKey, bucketName, storageKey).withNewObjectMetadata(metadata);
amazonS3.copyObject(copyObjectRequest);

Maybe the Cache-Control header on the PUT (Copy) request to the API is dropped somewhere along the way?
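
To see what actually comes back, I re-read the metadata after the copy (same amazonS3 client and variables as above; getCacheControl() is the standard ObjectMetadata accessor):

ObjectMetadata check = amazonS3.getObjectMetadata(bucketName, storageKey);
System.out.println("Content-Type:  " + check.getContentType());  // comes back as set
System.out.println("Cache-Control: " + check.getCacheControl()); // comes back null, i.e. ignored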

Patrick Suter
  • Sorry for the late response. Swisscom asked EMC/Dell about this: "According to our knowledge this should work. In order to be able to analyze and reproduce ourselves we kindly request more info like piece of code and detailed error messages". Please post code snippets to reproduce the issue. Maybe create another SO post or edit the question with all required info. – Sybil Nov 02 '16 at 07:46
  • Thanks for providing the helpful piece of code and your investigations in this case. Is there any specific reason why you want to set cacheControl? In our mind it only makes sense when using static webpages, which we currently do not support. – Sybil Nov 11 '16 at 12:47
  • It is part of our efforts to optimize site performance. We save uploaded content (mainly images) to the S3 storage and re-use the generated presigned URLs to retrieve data directly from S3 (especially for completely public content) in order to minimize the data loaded when revisiting the site. – Patrick Suter Nov 14 '16 at 18:42
  • Since we re-use the presigned URLs, we want to be in control of the Cache-Control headers. – Patrick Suter Nov 14 '16 at 18:48

1 Answer


According to the latest ATMOS Programmer's Guide (version 2.3.0, Tables 11 and 12), object COPY is specified neither as supported nor as unsupported.

I've been working with ATMOS for quite some time, and what I believe is that the S3 copy function is internally translated into a sequence of commands using ATMOS object versioning (page 76). So they might translate the Amazon copy operation into "create a version" and then "delete or truncate the old referenced object". Maybe I'm totally wrong (since I don't work for EMC :-)) and they handle it in a different way... but that's how I see it from reading the native ATMOS API documentation.

What you could try: use the native ATMOS API (which is a bit painful, yes, I know) to create a version of the original object (page 76), update the metadata of that version (User Metadata, page 12), and then restore the version to the top-level object (page 131). After that, check whether the metadata is properly returned by the S3 API.
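
If it helps, the sequence could look roughly like this with the EMC Atmos Java client (atmos-client). I'm sketching from memory, so treat the class and method names (AtmosApiClient, createVersion, setUserMetadata, restoreVersion) as assumptions to verify against the Programmer's Guide; atmosUid, atmosSecret, atmosEndpoint and oid are placeholders:

import java.net.URI;
import com.emc.atmos.api.AtmosApi;
import com.emc.atmos.api.AtmosConfig;
import com.emc.atmos.api.ObjectId;
import com.emc.atmos.api.bean.Metadata;
import com.emc.atmos.api.jersey.AtmosApiClient;

AtmosConfig config = new AtmosConfig(atmosUid, atmosSecret, new URI(atmosEndpoint));
AtmosApi atmos = new AtmosApiClient(config);

ObjectId objectId = new ObjectId(oid);               // object whose metadata should change
ObjectId versionId = atmos.createVersion(objectId);  // create a version (page 76)
// update the user metadata on that version (page 12); "false" = not listable
atmos.setUserMetadata(versionId, new Metadata("Cache-Control", "private, max-age=31536000", false));
atmos.restoreVersion(objectId, versionId);           // restore it to the top-level object (page 131)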

That's my 2 cents. If you decide to try this, post here whether it worked.

gsmachado
  • Thanks a lot for your suggestion! Unfortunately, it looks like I can't access the ATMOS API, because I am already using the S3 API and the following restriction is documented: _Important: Swisscom does not support access to the same data with both APIs. There are different accounts and access points. There is no migration path from Atmos API to S3 API or vice versa._ The service keys reflect this; I can only see S3 API connection data. – Patrick Suter Oct 28 '16 at 10:31