
Attempt with the AWS SDK v3, using my account's Master Application Key:

import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import axios from "axios";

const region = process.env.BUCKET_REGION;
const bucket = process.env.BUCKET_NAME;

const client = new S3Client({
  region,
  // endpoint: "https://s3.us-east-005.backblazeb2.com",
});
const expiresIn = 7 * 24 * 60 * 60; // 7 days, in seconds
const command = new PutObjectCommand({ Bucket: bucket, Key: filename });
const signedUrl = await getSignedUrl(client, command, { expiresIn });

await axios.put(signedUrl, "hello");

This is wrong because it generates a presigned URL like <BUCKET_NAME>.s3.us-east-005.amazonaws.com instead of <BUCKET_NAME>.s3.us-east-005.backblazeb2.com. Also, my understanding is that the AWS SDK v3 uses v4 signatures by default, and I did not see an option to explicitly set the v4 signature: https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingAWSSDK.html.

Attempt with the B2 Native API:

import B2 from "backblaze-b2";
import axios from "axios";

const b2 = new B2({
    applicationKey: process.env.APPLICATION_KEY!,
    applicationKeyId: process.env.APPLICATION_KEY_ID!,
});
await b2.authorize();

const bucketName = process.env.BUCKET_NAME!;
const bucketApi = await b2.getBucket({ bucketName });
const bucket = bucketApi.data.buckets.find(
    (b) => b.bucketName === bucketName
);

const signedUrlApi = await b2.getUploadUrl({ bucketId: bucket.bucketId });

await axios.put(signedUrlApi.data.uploadUrl, "testing123", {
    headers: {
      Authorization: signedUrlApi.data.authorizationToken,
    },
});

This fails with a 405 error. Please help, as I have not seen any docs on how to properly generate a presigned URL for uploading files into a Backblaze bucket from the client.

metadaddy
55 Cancri

1 Answer


S3 API

Note that, as mentioned in the B2 docs, you can't use your account's Master Application Key with the S3 API. So, first, you'll need to create a 'regular' application key for use with your app.

This works for me with a private bucket and a 'regular' application key:

import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import axios from "axios";

const endpoint = process.env.BUCKET_ENDPOINT; // "https://s3.us-east-005.backblazeb2.com"
const region = process.env.BUCKET_REGION; // "us-east-005"
const bucket = process.env.BUCKET_NAME;

const client = new S3Client({
  region,
  endpoint,
});
const expiresIn = 7 * 24 * 60 * 60; // 7 days, in seconds
const command = new PutObjectCommand({ Bucket: bucket, Key: filename });
const signedUrl = await getSignedUrl(client, command, { expiresIn });

await axios.put(signedUrl, "hello");

The only difference from your code is that I added endpoint to the configuration for the S3Client constructor, which you had commented out.

The AWS SDK for JavaScript v3 does indeed default to version 4 signatures, so you don't need to specify the signature version.

Here's an example of an (expired) presigned URL created by the above code:

https://metadaddy-private.s3.us-west-004.backblazeb2.com/hello.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=00415f935cf4dcb000000003c%2F20230327%2Fus-west-004%2Fs3%2Faws4_request&X-Amz-Date=20230327T231819Z&X-Amz-Expires=60&X-Amz-Signature=eed21bde4ee375d07e1b26c47512904a4972ab13d41bd1c81c16e48feec41dcc&X-Amz-SignedHeaders=host&x-id=PutObject

Inserting line breaks to see the components more easily:

https://metadaddy-private.s3.us-west-004.backblazeb2.com/hello.txt
?X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD
&X-Amz-Credential=00415f935cf4dcb000000003c%2F20230327%2Fus-west-004%2Fs3%2Faws4_request
&X-Amz-Date=20230327T231819Z
&X-Amz-Expires=60
&X-Amz-Signature=eed21bde4ee375d07e1b26c47512904a4972ab13d41bd1c81c16e48feec41dcc
&X-Amz-SignedHeaders=host
&x-id=PutObject

The X-Amz-Content-Sha256 query parameter is set to UNSIGNED-PAYLOAD because, typically, when you generate a presigned upload URL, you don't know what the content will be, so you can't compute the SHA-256 digest.
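You can inspect these components programmatically with the standard URL class. This sketch parses a truncated version of the expired example URL above and reads a few of its parameters:

```typescript
// Parse a presigned URL and inspect its query parameters.
// (Truncated version of the expired example URL above.)
const presigned = new URL(
  "https://metadaddy-private.s3.us-west-004.backblazeb2.com/hello.txt" +
    "?X-Amz-Algorithm=AWS4-HMAC-SHA256" +
    "&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD" +
    "&X-Amz-Expires=60" +
    "&X-Amz-SignedHeaders=host" +
    "&x-id=PutObject"
);
const params = presigned.searchParams;

// The hostname confirms the request will go to Backblaze, not AWS
console.log(presigned.hostname); // metadaddy-private.s3.us-west-004.backblazeb2.com
console.log(params.get("X-Amz-Content-Sha256")); // UNSIGNED-PAYLOAD
console.log(Number(params.get("X-Amz-Expires"))); // 60 (seconds until expiry)
```

A quick check of `presigned.hostname` is an easy way to catch the missing-endpoint problem from the question, where the generated URL pointed at amazonaws.com instead of backblazeb2.com.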

If your upload fails in the browser, you can narrow down the cause by testing the URL from the command line with curl:

curl -i -X PUT --data-binary 'Hello' --header 'Content-Type: text/plain' 'https://...your presigned url...'

If this fails, you will see the HTTP status code, as well as more detail in the body of the response. For example, testing with a bad access key ID:

HTTP/1.1 403 
x-amz-request-id: 46f5a7ff3a48b46a
x-amz-id-2: addJuXWt5bqtv2ndrbnY=
Cache-Control: max-age=0, no-cache, no-store
Content-Type: application/xml
Content-Length: 156
Date: Mon, 27 Mar 2023 23:57:55 GMT

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
    <Code>InvalidAccessKeyId</Code>
    <Message>Malformed Access Key Id</Message>
</Error>

If you can PUT a file from the command line with curl and your presigned URL, but not in the browser, it is likely CORS that is preventing the upload. Check the browser developer console for details.
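As a starting point, a CORS rule along these lines (set via the b2 command-line tool; the bucket name, rule name, and `allPrivate` bucket type are placeholders/assumptions) allows browsers to PUT to presigned URLs. The `allowedHeaders` entry matters: without it, headers the browser sends (such as `Content-Type`) will fail the pre-flight check. Tighten `allowedOrigins` before production use:

```shell
b2 authorize-account
b2 update-bucket --corsRules '[
  {
    "corsRuleName": "uploadViaPresignedUrl",
    "allowedOrigins": ["*"],
    "allowedHeaders": ["*"],
    "allowedOperations": ["s3_get", "s3_head", "s3_put"],
    "maxAgeSeconds": 3600
  }
]' <BUCKET_NAME> allPrivate
```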

B2 Native API

The B2 Native API cannot generate and use presigned URLs in the way that the S3 API can. Your code is simply uploading a file.

An HTTP 405 error means 'Method Not Allowed'. You can't PUT a file to the upload URL; you need to use POST (this is mentioned in the b2_upload_file docs). You also need a couple more headers:

  • X-Bz-File-Name - the filename
  • X-Bz-Content-Sha1 - a SHA-1 digest of the body

This should work for you:

import B2 from "backblaze-b2";
import axios from "axios";
import crypto from "crypto";

const b2 = new B2({
    applicationKey: process.env.APPLICATION_KEY!,
    applicationKeyId: process.env.APPLICATION_KEY_ID!,
});
await b2.authorize();

const bucketName = process.env.BUCKET_NAME!;
const bucketApi = await b2.getBucket({ bucketName });
const bucket = bucketApi.data.buckets.find(
    (b) => b.bucketName === bucketName
);

const signedUrlApi = await b2.getUploadUrl({ bucketId: bucket.bucketId });

const body = "testing123";

const sha1Hash = crypto
    .createHash('sha1')
    .update(body)
    .digest('hex');

await axios.post(signedUrlApi.data.uploadUrl, body, {
    headers: {
      Authorization: signedUrlApi.data.authorizationToken,
      "X-Bz-File-Name": filename,
      "X-Bz-Content-Sha1": sha1Hash,
    },
});

Strictly speaking, you can skip the SHA-1 digest, and pass do_not_verify in the X-Bz-Content-Sha1 header, but this is strongly discouraged, as it removes integrity protection on the uploaded data.

Backblaze B2 calculates its own SHA-1 digest of the data it receives and compares it to the digest you supply in the header. If some error were to corrupt the body in transit, the digests would not match, and B2 would reject the request with HTTP status 400 and the following response:

{
  "code": "bad_request",
  "message": "Checksum did not match data received",
  "status": 400
}

If, on the other hand, X-Bz-Content-Sha1 were set to do_not_verify, then B2 wouldn't be able to perform this check, and would unwittingly store the corrupted data.
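The check B2 performs is easy to reproduce locally. This sketch computes the hex SHA-1 digest that would be sent for the body used above, and shows that a corrupted body yields a different digest:

```typescript
import crypto from "crypto";

// Hex-encoded SHA-1 digest, as expected in the X-Bz-Content-Sha1 header
const sha1Hex = (body: string): string =>
  crypto.createHash("sha1").update(body).digest("hex");

const sent = sha1Hex("testing123");     // digest supplied in X-Bz-Content-Sha1
const received = sha1Hex("testing124"); // what B2 would compute for a corrupted body

console.log(sent.length);       // 40 hex characters
console.log(sent === received); // false - B2 would reject with 400 bad_request
```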

Note

If you do need a presigned URL, then you must use the AWS SDK to generate one, as shown above. As I mentioned, the B2 Native API does not include a way of generating and using presigned URLs.

If you don't need a presigned URL, and you simply need to upload data to an object, as you are doing in your B2 Native API code, you can do so more simply using the AWS SDK:

import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import axios from "axios";

const endpoint = process.env.BUCKET_ENDPOINT; // "https://s3.us-east-005.backblazeb2.com"
const region = process.env.BUCKET_REGION; // "us-east-005"
const bucket = process.env.BUCKET_NAME;

const client = new S3Client({
  region,
  endpoint,
});
const command = new PutObjectCommand({ 
  Bucket: bucket, 
  Key: filename, 
  Body: "hello",
});
await client.send(command);

BTW, we (Backblaze - I'm Chief Technical Evangelist there) advise developers to use the AWS SDKs and the Backblaze S3 Compatible API unless there's a particular reason to use the B2 Native API - for example, manipulating application keys.

metadaddy
  • Thank you for your response. I commented that line because it still didn't work even with endpoint defined. I still get 403. Were you uploading to a private or public bucket? How did your generated signed S3 token look? This is mine: https://.s3..backblazeb2.com/hello.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=%2F%2Fs3%2Faws4_request&X-Amz-Date=20230327T230321Z&X-Amz-Expires=604800&X-Amz-Security-Token=&X-Amz-Signature=&X-Amz-SignedHeaders=host&x-id=PutObject – 55 Cancri Mar 27 '23 at 23:11
  • Notice the Sha256 says UNSIGNED-PAYLOAD. Additionally, did you do any special cors config to your bucket? I used: b2 authorize-account && b2 update-bucket --corsRules '[ { "corsRuleName": "downloadFromAnyOriginWithUpload", "allowedOrigins": [ "*" ], "allowedOperations": [ "s3_delete", "s3_get", "s3_head", "s3_post", "s3_put" ], "maxAgeSeconds": 3600 } ]' but still I get the 403 with the presigned url. Do you see anything wrong or missing on my end? – 55 Cancri Mar 27 '23 at 23:13
  • I agree, I will also prefer to upload using aws sdk that way I don't have to first send the file from the client to the server to be uploaded by b2 apis, i can instead just send a presigned url from the server and have the client upload the file directly. One final thing I find odd is your final aws sdk example. Why do you include a body in your PutObjectCommand? My understanding is only the Bucket and Key were needed to generate the signedUrl, and then the client would axios put to this signed url with the file buffer as the body. Am I mistaken here? – 55 Cancri Mar 27 '23 at 23:20
  • Hi @55Cancri - I added some more detail to my answer. I'm using a private bucket, and I pasted in an example presigned URL that I generated, and an explanation of `UNSIGNED-PAYLOAD`. – metadaddy Mar 28 '23 at 00:07
  • In the CORS rule in your comment, you don't specify a value for `allowedHeaders`, which means that no headers will be allowed in pre-flight OPTIONS requests. Try setting that to `'*'`, just like `allowedOrigins`. Finally, I included that last AWS SDK example because I wasn't sure whether you were trying to just upload some data. You are correct - you only need the bucket and key to generate the presigned URL. – metadaddy Mar 28 '23 at 00:11
  • Looking at the CORS rules generated by the Web UI, it's probably the `Authorization` header that is causing the problem. The Web UI creates a download CORS rule with `"allowedHeaders": [ "authorization", "range" ]` - you could use that instead of `"allowedHeaders": [ "*" ]`. – metadaddy Mar 28 '23 at 00:23
  • Thank you so much @metadaddy. The curl tip helped me figure out that my access key ID was invalid. I was using the master key's key ID and application key, but it seems I had to instead create a new application key and use those. It's also possible the CORS config was wrong, though I haven't tested without the allowedHeaders yet. I've been stuck on this for a few days now, so your help is tremendously appreciated. – 55 Cancri Mar 28 '23 at 00:41