I'm able to use Terraform to launch a VM, and if I open a shell on that VM from the GCP web UI, I can manually authenticate with:
gcloud auth application-default login
gcloud auth login
and then I can manually mount a GCS bucket on the VM:
gcsfuse bucket_name ~/mount_point/
But I can't do this automatically from the Terraform startup script. The authentication commands are, of course, interactive by nature, involving jumping over to a browser to grab credentials, etc. And the gcsfuse command obviously fails without that auth, so putting it into the startup script doesn't work; it produces a permissions error, as expected.
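For reference, here's roughly what I have now; the resource names, zone, image, and bucket are placeholders, and the startup script at the end is the part that fails:

resource "google_compute_instance" "vm" {
  name         = "bucket-test-vm"   # placeholder
  machine_type = "e2-medium"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    network = "default"
  }

  # gcsfuse is assumed to already be on the image (or installed earlier in the script).
  # This runs as root before any credentials exist on the VM, so it fails.
  metadata_startup_script = <<-EOT
    #!/bin/bash
    mkdir -p /mnt/bucket_name
    gcsfuse bucket_name /mnt/bucket_name   # permission error, as described above
  EOT
}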
I already have the various service and application accounts set up in GCP, and I've downloaded the associated JSON key files, but I don't understand how to authenticate the new VM automatically, from Terraform, so that it can mount the bucket. I suspect this has something to do with google_service_account and google_service_account_key, but I haven't figured out how to wire it up. I just don't know how to get the JSON key files into the Terraform configuration so that the root user on the VM, running the startup script, can mount the bucket.
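The direction I've been poking at looks something like the sketch below (all names here are made up), but I don't see how the generated key is supposed to reach the startup script:

resource "google_service_account" "bucket_reader" {
  account_id   = "bucket-reader"   # made-up id
  display_name = "Service account for mounting the bucket"
}

resource "google_service_account_key" "bucket_reader_key" {
  service_account_id = google_service_account.bucket_reader.name
}

# ...and then somehow get the key's JSON onto the VM so that, presumably,
# the startup script could do something like this (no idea if this is right):
#
#   GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json gcsfuse bucket_name /mnt/bucket_name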
Someone else suggested this might have something to do with fstab too. I already added the following to the startup script:
echo 'bucket_name /mount/bucket_name gcsfuse rw,noauto,user' >> /etc/fstab
And I can see from the shell that the echo command worked (fstab is correspondingly altered), but I still can't mount the bucket. I also tried mounting with mount instead of gcsfuse:
mount -allow_other -t gcsfuse -o rw,user bucket_name /mnt/bucket_name/
But that also doesn't work: mount doesn't produce an error, but the mounted directory remains empty nonetheless. Gah! What is the solution here?
Thanks.