
I seem to once again have some sort of authority problem.

I need to move the entire contents of a bucket into a new bucket (the existing one has "test" in its name, and we could be using it for production very soon). I understand that is normally done via "transfer."

I can create a bucket, but I can't access the "transfer" function. If I try to go to the transfer page, I get: You don't have permissions to perform the action on the selected resource.

My research tells me I need the "editor" or "owner" role, but I can't even tell what project-level role(s) I have. And lacking one of those roles, how is it that I can create a bucket?

2 Answers


The Storage Transfer Service is a good option for moving or copying very large buckets. Keep in mind, though, that if the bucket is fairly small (maybe a dozen gigabytes or less), it might be easier to just copy things from the command line, like so: `gsutil -m cp -r gs://sourceBucket/* gs://destBucket`.

Assuming you do want to use Storage Transfer Service, you will need to be a member (specifically, an owner or an editor) of the project that creates the job. This doesn't necessarily need to be the project that owns the source or the destination bucket. If you're not a member, you'll need someone to grant you that role (see "Manage project members" for more on how this is done). Restated here:

  1. Find someone who owns the project.
  2. Have them go to the Cloud Platform Console.
  3. Open the console left side menu and select IAM & Admin.
  4. From the project list, choose the project that you want to add a member to.
  5. Click Add Member and provide an email address.
  6. Select a role: project owner or editor.
  7. Click "Add".
  8. If you've chosen owner, go check your email to see your invitation to become a new owner. Click the link to accept the role.

Once you are an owner or an editor of your project, head to the storage transfer console to configure and start your transfer.
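If the project owner prefers the command line over the console, the same grant can be made with `gcloud`. A sketch of the steps above, where `my-project` and `jane@example.com` are placeholders for the real project ID and member email:

```shell
# Grant the project editor role to a member (run by a project owner).
# "my-project" and "jane@example.com" are placeholders.
gcloud projects add-iam-policy-binding my-project \
    --member="user:jane@example.com" \
    --role="roles/editor"

# Check which project-level roles a member currently holds
# (useful for the "I can't even tell what roles I have" problem).
gcloud projects get-iam-policy my-project \
    --flatten="bindings[].members" \
    --filter="bindings.members:jane@example.com" \
    --format="table(bindings.role)"
```

The second command also answers the original question about inspecting your own project-level roles, provided you have permission to read the project's IAM policy.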

Brandon Yarbrough
  • Thanks. Actually, more like `gsutil cp -r gs://source/* gs://target` because the directory tree needs to be preserved. But with that one change, the result certainly looks right. – hbquikcomjamesl Jul 27 '18 at 22:49
  • Good point, I'll fix it. Also `gsutil -m` would be a good idea if it's a huge bucket. That'll almost certainly speed things up. – Brandon Yarbrough Jul 27 '18 at 23:18
  • I can see the contents of the new bucket from the Storage Browser, but not from a Compute instance in which I've mounted the new bucket! (Starting a new thread about it.) – hbquikcomjamesl Jul 31 '18 at 00:13

"gsutil cp" ultimately didn't work: even with a "-p" option added, to preserve the ACLs, the contents of the bucket were completely invisible to the Google Compute instances that needed to access the bucket.

I finally ended up wiping the new bucket clean, mounting both buckets in an instance, and doing a "cp -r -P -p" from one bucket to the other, from the instance command line. THAT worked.
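For anyone reproducing this, the instance-side copy looked roughly like the following. This is a sketch, not the exact commands: the bucket names and mount points are placeholders, and it assumes `gcsfuse` is installed on the instance to mount the buckets.

```shell
# Mount both buckets on the instance (requires gcsfuse).
# "my-test-bucket" and "my-prod-bucket" are placeholder names.
mkdir -p /mnt/oldbucket /mnt/newbucket
gcsfuse my-test-bucket /mnt/oldbucket
gcsfuse my-prod-bucket /mnt/newbucket

# Copy recursively (-r), never following symlinks (-P),
# preserving mode, ownership, and timestamps (-p).
cp -r -P -p /mnt/oldbucket/. /mnt/newbucket/

# Unmount when finished.
fusermount -u /mnt/oldbucket
fusermount -u /mnt/newbucket
```

Because the copy runs through the same FUSE mount the Compute instances use, the resulting objects are visible to those instances, which was the original problem with the `gsutil cp` approach.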