I try to run the following code, but the uploaded index.js turns out to be corrupted.

Any idea why?

gzip dist/production/index.js
mv dist/production/index.js.gz dist/production/index.js

s3cmd --access_key="$S3_ACCESS_KEY" --secret_key="$S3_SECRET_KEY" \
      --acl-public --no-mime-magic --progress --recursive         \
      --exclude "dist/production/index.js" \
      put dist/production/ \
      "s3://${BUCKET}/something/${BUILD_IDENTIFIER}/production/" &

s3cmd --access_key="$S3_ACCESS_KEY" --secret_key="$S3_SECRET_KEY" \
      --acl-public --no-mime-magic --progress --recursive         \
      --add-header="Content-Encoding:gzip" \
      put dist/production/index.js \
      "s3://${BUCKET}/something/${BUILD_IDENTIFIER}/production/" &

wait

Notice the & at the end of the two commands, which runs the two uploads to the same location in parallel.

Edit:

It works fine without parallelizing the two commands and running them in the background. I wanted to make the process faster, so I upload the heavy gzipped index.js while the other files are being uploaded.

Edit2:

What I get in the index.js that is uploaded is gibberish content like this:

��;mS�H���W �7�i"���k��8̪

Edit3:

Looks like the problem was with how I used --exclude. It excludes relative to the uploaded folder, not to the working directory.

--exclude "dist/production/index.js" ==> --exclude index.js

fixed it.
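For reference, the first upload with the corrected --exclude would look like this (same flags and paths as above; only the exclude pattern changes):

# --exclude matches relative to the uploaded folder, so "index.js" now skips the
# gzipped file that the second command uploads with the Content-Encoding header.
s3cmd --access_key="$S3_ACCESS_KEY" --secret_key="$S3_SECRET_KEY" \
      --acl-public --no-mime-magic --progress --recursive         \
      --exclude "index.js" \
      put dist/production/ \
      "s3://${BUCKET}/something/${BUILD_IDENTIFIER}/production/" &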

Vitali Zaidman
  • Does this do what you want if you don't background the two invocations? – jarmod Mar 20 '18 at 15:53
  • Yes, it works fine. I'm trying to make it faster by parallelizing them. I edited my question with this info. – Vitali Zaidman Mar 20 '18 at 16:00
  • I don't use s3cmd but it seems that one reason people strive to run it in parallel is because it's slow. The solution is often simply to not use s3cmd, but to use awscli or s3-cli. Also related: https://stackoverflow.com/questions/26934506/uploading-files-to-s3-using-s3cmd-in-parallel – jarmod Mar 20 '18 at 16:24 (see the AWS CLI sketch after these comments)
  • You are inviting trouble if you name a gzipped file with a `.js` extension - please don't do that. – Mark Setchell Mar 20 '18 at 17:51
  • @MarkSetchell if the object is uploaded with `Content-Encoding: gzip`, then this is correct. The browser will see the encoding header and transparently decompress the object, and handle it correctly if the `Content-Type` is set to the right MIME type. The extension should reflect the type, not the transfer encoding. S3 doesn't do content negotiation, so this is standard practice. – Michael - sqlbot Mar 21 '18 at 01:52
  • @VitalikZaidman it looks like `--exclude "dist/production/index.js"` isn't doing what you expect. Do the two commands in the reverse order, not in parallel, and you should find the same "corrupt" behavior. Perhaps the correct argument is `--exclude index.js`, since the upload is rooted in `./dist/production/`. – Michael - sqlbot Mar 21 '18 at 01:55
  • @Michael-sqlbot you saved my life. It was indeed the problem. Thanks!!! – Vitali Zaidman Mar 21 '18 at 10:16
  • @Michael-sqlbot, please write it as a comment to the original question so i can accept it – Vitali Zaidman Mar 24 '18 at 11:02
  • @VitalikZaidman thank you for the reminder. For some reason, I was thinking I already had. – Michael - sqlbot Mar 24 '18 at 19:30
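Along the lines of jarmod's comment about awscli, here is a rough, untested sketch of the same split upload with the AWS CLI. Credentials are assumed to come from the environment, the bucket/key layout is copied from the question, and the explicit Content-Type follows Michael - sqlbot's note about MIME types:

# Everything except the top-level index.js; exclude patterns are relative to the source directory.
aws s3 cp dist/production/ "s3://${BUCKET}/something/${BUILD_IDENTIFIER}/production/" \
    --recursive --exclude "index.js" --acl public-read &

# The pre-gzipped index.js, with Content-Encoding and an explicit MIME type.
aws s3 cp dist/production/index.js \
    "s3://${BUCKET}/something/${BUILD_IDENTIFIER}/production/index.js" \
    --content-encoding gzip --content-type application/javascript --acl public-read &

wait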

1 Answer


Isn't your problem with this line?

cp dist/production/index.js.gz dist/production/index.js

You are copying the gzipped file, not the plain index.js text file.

Hope it helps.

EDIT1:

If you are doing it on purpose, why not keep the .gz extension? The extension matters a lot when the browser handles the file. In other words, leave the file as dist/production/index.js.gz instead of renaming it to dist/production/index.js.

If you download the object with plain S3 and verify the hash, it should be the same file. I did verify it.
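One way to do that check with the tools from the question (s3cmd get fetches the raw object bytes, so nothing is decompressed on the way down; the local download path is just an example):

# Hash of the gzipped file that was uploaded.
md5sum dist/production/index.js

# Download the stored object as-is and hash it; the two digests should match.
s3cmd --access_key="$S3_ACCESS_KEY" --secret_key="$S3_SECRET_KEY" \
      get "s3://${BUCKET}/something/${BUILD_IDENTIFIER}/production/index.js" /tmp/index.js.s3
md5sum /tmp/index.js.s3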

Kannaiyan