
We have enabled automatic compression on CloudFront and it works great. It looks like it uses Brotli compression level 5 (probably for performance reasons), and sometimes, for big files, we would like to compress locally at the maximum level, which is roughly 20x slower, and upload the result next to the CSS file to make it even smaller.

With the default TailwindCSS file the difference is pretty big:

   2.8M Nov 17 12:03 test.css
   152K Nov 17 12:04 test-level-5.css.br
    71K Nov 17 12:04 test-level-11.css.br
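The two compressed files above can be produced locally with the brotli CLI. A sketch, assuming the standard Google brotli command-line tool is installed; the file names match the listing above:

```shell
# Maximum compression (quality 11, the brotli CLI default)
brotli -q 11 -o test-level-11.css.br test.css

# Faster, CloudFront-like compression at quality 5
brotli -q 5 -o test-level-5.css.br test.css
```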

When I add a test.css.br file next to test.css on S3 and then invalidate the file on CloudFront, it still serves the dynamically compressed file. Is it possible for CloudFront to respect the file I upload when it is present? I would like to avoid writing a Lambda@Edge function to do this.

Pavelloz

1 Answer


I'm facing the same issue.

After some searching and testing, I found a solution.

First, your outputs should be Brotli- or gzip-compressed files without an extra suffix (i.e., not xxx.js.br or xxx.js.gz — just xxx.js; CSS and other files work the same way).

Second, when uploading the output files, add a Content-Encoding header to specify the compression type manually.

s3cmd --access_key=$AK --secret_key=$SK --region=$RG --host=$HOST \
  -m application/javascript \
  --add-header=Content-Encoding:br \
  --add-header=Content-Type:application/javascript \
  put $YOUR.js $TARGET_BUCKET
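If you use the AWS CLI instead of s3cmd, `aws s3 cp` can set both headers directly. A sketch; the bucket and file names here are placeholders, not from the question:

```shell
# Upload a pre-compressed file with explicit encoding and type headers,
# so S3/CloudFront serve it with Content-Encoding: br
aws s3 cp app.js s3://your-bucket/app.js \
  --content-encoding br \
  --content-type application/javascript
```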
Ouyang Chao
  • That sounds like a solution for those who do not need the uncompressed JavaScript to be available (most people). Nice work! – Pavelloz May 26 '23 at 11:03