We use Docker containers for most of our work, including development on our own machines. These containers are ephemeral: started fresh each time we run a test, for example.

For AWS, auth is easy: we have our keys in our environment, and those are passed through to the container.
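
For example, a minimal sketch of that pass-through (the image name my-image is hypothetical; -e with no value copies the variable from the host environment into the container):

docker run \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY \
  my-image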

We're starting to use Google Cloud services, and the auth path seems harder than AWS's. For local development, gcloud auth login works well, but in an ephemeral container the login flow would be needed every time the container starts. I haven't found a way of persisting user credentials through either a) environment variables or b) mapped volumes, which are the two ways of passing data into containers.

From what I can read, the only path is to use service accounts. But then, I think, everyone needs their own service account, and that account's permissions have to be constantly updated to stay aligned with their own.

Is there a better way?

– Maximilian
  • Can you use one service account to create a JSON key file, pass that into the image during docker build while also setting the environment variable GOOGLE_APPLICATION_CREDENTIALS, and then have everyone run containers off that image? (A sketch of this appears after these comments.) https://developers.google.com/identity/protocols/application-default-credentials#howtheywork – gunit May 05 '17 at 23:31
  • @Maximilian, did you find a good solution? For various reasons, I couldn't use a service account in my case, and would like to copy my own user's (@gmail.com) credentials to the ephemeral container. – zyxue Jun 28 '18 at 16:32
  • @zyxue we use the volume solution successfully, similar to the answer below. We use a named volume within a docker-compose file so that it's not dependent on a local path – Maximilian Jun 28 '18 at 16:48
  • OK, I see. Then my situation is a bit different; I meant to launch a container in a new VM, not locally. Still, thanks. – zyxue Jun 28 '18 at 17:00
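
For what it's worth, a minimal sketch of the approach gunit describes: bake a service-account key file into the image at build time and point GOOGLE_APPLICATION_CREDENTIALS at it. The file name key.json and the /secrets path are assumptions, and note that anyone who can pull the image can read the key:

# Dockerfile sketch: assumes key.json sits next to the Dockerfile
FROM node:4
COPY key.json /secrets/key.json
ENV GOOGLE_APPLICATION_CREDENTIALS=/secrets/key.json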

2 Answers


The easiest way to make a local container see the gcloud credentials might be to map the file-system location of the application default credentials into the container.

First, do

gcloud auth application-default login

Then, run your container as

docker run -ti -v=$HOME/.config/gcloud:/root/.config/gcloud test

This should work, because the application default credentials saved by the previous command live under $HOME/.config/gcloud on the host, and the client libraries inside the container look for them under root's home directory, /root/.config/gcloud. I tried it with a Dockerfile like

FROM node:4
RUN npm install --save @google-cloud/storage
ADD test.js .
CMD node ./test.js

and the test.js file like

var storage = require('@google-cloud/storage');

// The client picks up Application Default Credentials from the
// mounted /root/.config/gcloud directory automatically.
var gcs = storage({
    projectId: 'my-project-515',
});

// List the files in a bucket to verify that auth works.
var bucket = gcs.bucket('my-bucket');
bucket.getFiles(function(err, files) {
  if (err) {
    console.log("failed to get files: ", err);
  } else {
    for (var i in files) {
      console.log("file: ", files[i].name);
    }
  }
});

and it worked as expected.
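
To reproduce this end to end, build the image with the tag that the docker run command above expects:

docker build -t test .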

– Alexey Alexandrov

I had the same issue, but I was using docker-compose. It was solved by adding the following to docker-compose.yml:

    volumes:
      - $HOME/.config/gcloud:/root/.config/gcloud
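
For context, a minimal sketch of where that fragment sits in a full docker-compose.yml (the service name app and its image are assumptions):

version: '3'
services:
  app:
    image: my-image
    volumes:
      - $HOME/.config/gcloud:/root/.config/gcloud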
– Vojtěch