4

I am attempting to set permissions on individual objects in a Google Cloud Storage bucket to make them publicly viewable, following the steps indicated in Google's documentation. When I try to make these requests using our application service account, it fails with HTTP status 403 and the following message:

Access denied. Provided scope(s) are not authorized.

Other requests work fine. When I try to do the same thing but by providing a token for my personal account, the PUT request to the object's ACL works... about 50% of the time (the rest of the time it is a 503 error, which may or may not be related).

Changing the IAM policy for the service account to match mine - it normally has Storage Admin and some other incidental roles - doesn't help, even if I give it the overall Owner IAM role, which is what I have.

Neither the XML API nor the JSON API makes a difference. The fact that the request sometimes works with my personal credentials suggests the request itself is well formed, but there must be something else I've overlooked so far. Any ideas?

Maxim
  • 4,075
  • 1
  • 14
  • 23
Evan
  • 174
  • 2
  • 10
  • Are you using the uniform bucket-level access policy on that bucket? – Oliver Aragon Jan 10 '20 at 17:26
  • Do you run it in the same environment for both user and service account credentials? Otherwise, it could be something like GCE instance with insufficient scopes – Guillem Xercavins Jan 12 '20 at 11:41
  • Have you checked [this reference](https://cloud.google.com/storage/docs/authentication#oauth-scopes) about GCS scopes? – manasouza Jan 13 '20 at 11:27
  • @OliverAragon No, and we can't just make a public bucket because not all files in the bucket are meant to be public. – Evan Jan 13 '20 at 17:36
  • @GuillemXercavins Yes, I think so. – Evan Jan 13 '20 at 17:41
  • Scopes do not override roles - they limit authorization. Your problem is that the service account does not have the required roles. You cannot grant permission to a request via scopes for which the roles have not been assigned to the identity. Edit your question with details on the service account, the roles, and how you are creating the Access Token. – John Hanley Jan 15 '20 at 04:05
  • Additional comment. The 503 error when using a User Credential OAuth Access Token might be caused by API rate limiting. Do not use user credentials for software that makes lots of API calls. – John Hanley Feb 09 '20 at 23:17

5 Answers

2

Check the scope of the service account in case you are using the default Compute Engine service account. By default the scope is restricted, and for GCS it is read-only. Use rm -r ~/.gsutil to clear the gsutil cache if needed.
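Not from the original answer, but one way to verify which scopes the instance's default service account token actually has is to query the metadata server from inside the VM:

# Run inside the GCE instance: lists the OAuth scopes granted at instance creation
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"

If the output only shows https://www.googleapis.com/auth/devstorage.read_only, ACL edits will be rejected regardless of which IAM roles the service account holds.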

Ambesh
  • 111
  • 1
  • 8
  • 2
    Your answer could be improved with additional supporting information. Please [edit] to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers [in the help center](/help/how-to-ask). – Community Sep 22 '21 at 06:41
1

Follow the documentation you provided, taking into account these points:

  • The access control system for the bucket has to be Fine-grained (not uniform).

  • In order to make objects publicly available, make sure the bucket does not have the public access prevention enabled. Check this link for further information.

  • Grant the service account the appropriate permissions on the bucket. The Storage Legacy Object Owner role (roles/storage.legacyObjectOwner) is needed to edit object ACLs, as indicated here. This role can be granted for individual buckets, not for projects.

  • Create the JSON file as indicated in the documentation (a sample payload is shown after the curl call below).

  • Use gcloud auth application-default print-access-token to get an authorization access token and use it in the API call. The API call should look like:

curl -X POST --data-binary @JSON_FILE_NAME.json \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json" \
  "https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/o/OBJECT_NAME/acl"
JMA
  • 803
  • 4
  • 9
1

You need to add the OAuth scope cloud-platform when you create the instance. See: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create#--scopes

Either select "Allow full access to all Cloud APIs" or select a fine-grained set of scopes, as in the example below.
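As a concrete example (the instance name and zone are placeholders, not from the answer), the equivalent at creation time from the CLI would be something like:

# Create an instance whose default service account token carries the cloud-platform scope
gcloud compute instances create my-instance \
  --zone=us-central1-a \
  --scopes=cloud-platform

The --scopes flag accepts the documented aliases as well as full scope URLs.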

Kevin Danikowski
  • 4,620
  • 6
  • 41
  • 75
1

When trying to access GCS from a GCE instance and getting this error message ...
the default scope is devstorage.read_only, which prevents all write operations.

I'm not sure whether the scope https://www.googleapis.com/auth/cloud-platform is required, since the scope https://www.googleapis.com/auth/devstorage.read_only is given by default (e.g. to read startup scripts). For write access, the scope should rather be https://www.googleapis.com/auth/devstorage.read_write.


And one can use gcloud beta compute instances set-scopes to edit the scopes of an instance:

gcloud beta compute instances set-scopes $INSTANCE_NAME \
  --project=$PROJECT_ID \
  --zone=$COMPUTE_ZONE \
  --scopes=https://www.googleapis.com/auth/devstorage.read_write \
  --service-account=$SERVICE_ACCOUNT

One can also pass any of the known alias names for scopes, e.g. --scopes=cloud-platform. The command must be run from outside the instance because of permissions, and the instance must be shut down in order to change the service account.

Martin Zeitler
  • 1
  • 19
  • 155
  • 216
  • As a note, this also solved my problems with writing to a bucket from a compute engine machine (and THANK YOU, for some reason I was not finding this in the documentation using the terms I was thinking in). – octern Feb 19 '22 at 16:42
  • I'm accepting this as correct because, even though it didn't quite solve my problem (see below), it's very close - the gcloud CLI SDK, like the Python SDK, requires that the running program have an active auth token, and the auth token itself has scopes which are a restricted subset of the service account's. In any case, to make a URL public what you need is the `devstorage.full_control` scope. – Evan Mar 28 '22 at 14:06
0

So, years later, it turns out the problem is that "scope" is used by the Google Cloud API to refer to two subtly different things. One is the set of access scopes available to the service account, which is what I (and most of the other people who answered the question) kept focusing on, but the problem turned out to be something else. The Python class google.auth.credentials.Credentials, used by the various Google Cloud client classes to authenticate, also has permission scopes used for OAuth. You can see where this is going - the client I was using was being created with a default OAuth scope of 'https://www.googleapis.com/auth/devstorage.read_write', but making something public requires the scope 'https://www.googleapis.com/auth/devstorage.full_control'. Adding this scope to the OAuth credential request means that setting public permissions on objects works.
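For anyone hitting the same thing with the Python client, here is a minimal sketch of the idea (the bucket, object, project, and key-file names are placeholders; it assumes the google-cloud-storage and google-auth packages):

from google.cloud import storage
from google.oauth2 import service_account

# Request the full_control scope explicitly; the default read_write scope
# is not sufficient to modify object ACLs.
credentials = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/devstorage.full_control"],
)

client = storage.Client(credentials=credentials, project="my-project")
blob = client.bucket("my-bucket").blob("path/to/object.png")
blob.make_public()  # adds the allUsers READER entry to the object's ACL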

Evan
  • 174
  • 2
  • 10