
S3 + CloudFront is not serving the .gz / .br static files when the client request header contains Accept-Encoding: gzip, deflate, br.

  1. Compressed the files at build time; the S3 folder contains index.html, index.html.gz and index.html.br
  2. Added Accept-Encoding to the whitelisted headers in CloudFront.
  3. Added Content-Length to the S3 CORS configuration.
  4. Set Content-Encoding to gzip for index.html.gz and to br for index.html.br, with Content-Type text/html (see the upload sketch below).
  5. Disabled automatic compression in CloudFront.

But I am not getting the compressed files from S3 + CloudFront. I can access index.html.gz directly, but CloudFront + S3 will not serve the compressed variants automatically. Am I missing something? Or is it not possible to serve files like this?
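
For reference, a minimal sketch of how the Content-Encoding / Content-Type metadata can be set at upload time, assuming the AWS SDK for JavaScript (v2); the bucket name and local file paths are placeholders:

'use strict';

const fs = require('fs');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

// Upload the pre-compressed variants next to the plain file, each with
// the matching Content-Encoding metadata (bucket/keys are placeholders).
const uploads = [
  { key: 'index.html',    file: 'dist/index.html',    encoding: undefined },
  { key: 'index.html.gz', file: 'dist/index.html.gz', encoding: 'gzip' },
  { key: 'index.html.br', file: 'dist/index.html.br', encoding: 'br' },
];

uploads.forEach(({ key, file, encoding }) => {
  const params = {
    Bucket: 'my-static-site-bucket',   // placeholder bucket name
    Key: key,
    Body: fs.readFileSync(file),
    ContentType: 'text/html',
  };
  if (encoding) {
    params.ContentEncoding = encoding; // 'gzip' or 'br'
  }
  s3.putObject(params, (err) => {
    if (err) console.error('Upload failed for ' + key, err);
  });
});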

rinesh
  • Step 4: rename index.html.gz to index.html ... upload this renamed file to your S3 ... then set the content encoding to gzip on that file – Akber Iqbal May 26 '19 at 04:00
  • No, you request index.html, CloudFront makes the request for index.html, and S3 is going to serve you that object irrespective of the Accept-Encoding header in the request. You can't have the compressed file served based on the extension; the compressed file still needs to be named index.html, but S3 carries the metadata Content-Encoding: gzip so the browser understands it is compressed. It doesn't go by the file extension. – James Dean May 26 '19 at 04:10
  • How will I serve the brotli file? – rinesh May 26 '19 at 05:10
  • Same as gzip: have a br-compressed file named index.html, for example, and set Content-Encoding: br so the browser understands it is compressed. – James Dean May 26 '19 at 06:56
  • So it is not possible to have both gzip and br files and serve them based on the header... In that case we need to go for Lambda@Edge... Is that correct? – rinesh May 26 '19 at 07:08
  • Yes. S3 doesn't work like a typical server that looks at the Accept-Encoding header and serves the matching file; you need to implement that logic yourself, for example with Lambda@Edge. Keep in mind that CloudFront removes the Accept-Encoding header if it is not gzip, so if you're planning to write an origin-request Lambda@Edge function, you need to whitelist the header. – James Dean Jun 01 '19 at 18:24
  • Hi @rinesh, did you find a solution for this? Are you able to serve both brotli and gzip files according to the Accept-Encoding header? – Always_a_learner Aug 25 '20 at 12:49

1 Answer


This can be done with CloudFront -> Lambda@Edge (origin request) -> S3. Since this question was asked, AWS added the ability to pass the Accept-Encoding header through to S3, so the Lambda function can use it.

The Lambda function takes the Accept-Encoding header and checks whether brotli is in it; if so, it adds the corresponding extension to the request that goes to the S3 bucket. Clients still request the same URL but get different objects based on that Accept-Encoding header.

Also, make sure that your CloudFront cache policy includes the Accept-Encoding header in the cache key.
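
For illustration, a rough sketch of creating such a cache policy with the AWS SDK for JavaScript (v2); the policy name and TTLs are placeholders, and as I understand it the EnableAcceptEncoding* flags are what make CloudFront normalize Accept-Encoding, include it in the cache key, and pass it on towards the origin:

'use strict';

const AWS = require('aws-sdk');
const cloudfront = new AWS.CloudFront();

// Create a cache policy that caches separately per (normalized)
// Accept-Encoding value. Name and TTLs are placeholders.
cloudfront.createCachePolicy({
  CachePolicyConfig: {
    Name: 'split-cache-by-accept-encoding',
    MinTTL: 0,
    DefaultTTL: 86400,
    MaxTTL: 31536000,
    ParametersInCacheKeyAndForwardedToOrigin: {
      // Let CloudFront normalize Accept-Encoding and use it in the cache key.
      EnableAcceptEncodingGzip: true,
      EnableAcceptEncodingBrotli: true,
      HeadersConfig: { HeaderBehavior: 'none' },
      CookiesConfig: { CookieBehavior: 'none' },
      QueryStringsConfig: { QueryStringBehavior: 'none' },
    },
  },
}, (err, data) => {
  if (err) console.error(err);
  else console.log('Cache policy id:', data.CachePolicy.Id);
});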

Example code for Lambda:

'use strict';

/**
 * Function registered on the 'Origin Request' CloudFront event.
 * Rewrites the request URI so S3 serves the pre-compressed object.
 */
exports.handler = (event, context, callback) => {
  const request = event.Records[0].cf.request;
  const headers = request.headers;
  let isBr = false;

  // Prefer brotli when the client advertises it in Accept-Encoding.
  if (headers['accept-encoding'] && headers['accept-encoding'][0].value.indexOf('br') > -1) {
    isBr = true;
  }
  const gzipPath = '.gz';
  const brPath = '.br';

  // Update the request path based on the Accept-Encoding header.
  // (Assumes every client accepts at least gzip.)
  request.uri = request.uri + (isBr ? brPath : gzipPath);
  callback(null, request);
};
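
If some clients might not advertise gzip at all, a variant of the handler (an untested sketch) could also fall back to the uncompressed object:

'use strict';

/**
 * Variant of the handler above: prefers brotli, then gzip, and falls
 * back to the uncompressed object when neither is accepted.
 */
exports.handler = (event, context, callback) => {
  const request = event.Records[0].cf.request;
  const headers = request.headers;
  const accept = headers['accept-encoding'] ? headers['accept-encoding'][0].value : '';

  let suffix = '';                 // plain index.html by default
  if (accept.indexOf('br') > -1) {
    suffix = '.br';                // brotli when offered
  } else if (accept.indexOf('gzip') > -1) {
    suffix = '.gz';                // otherwise gzip when offered
  }

  request.uri = request.uri + suffix;
  callback(null, request);
};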