
What is the standard approach to getting GCP credentials into k8s pods when using skaffold for local development?

When I previously used docker compose and aws it was easy to volume mount the ~/.aws folder to the container and everything just worked. Is there an equivalent solution for skaffold and gcp?

Marty Young
  • As far as I understand you want to be able to use the **cloud sdk**, or communicate in any other way with the **GCP API**, from `Pods` deployed locally by **skaffold**, right? – mario Apr 01 '20 at 23:55
  • Correct, I want the GCP API code that I run locally to have the same permissions on a local skaffold-deployed pod. What I am doing right now is putting a service account file in the root of the project and copying it in the Dockerfile, which means manual steps for other developers which isn't ideal. – Marty Young Apr 02 '20 at 10:45
  • Hey, were you able to find an answer to that? – MWZ Jul 03 '20 at 13:27
  • The best I could come up with was to create a service account JSON and copy it over in the dockerfile, setting the GOOGLE_APPLICATION_CREDENTIALS env var to point to it. – Marty Young Jul 04 '20 at 17:43
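
The Dockerfile approach mentioned in the comments might look something like the sketch below. The key file name and path are placeholders, and note that baking a service account key into an image is only reasonable for local development:

```dockerfile
FROM google/cloud-sdk:alpine

# Copy a service account key into the image
# (filename is a placeholder; keep the key out of version control)
COPY service-account.json /secrets/service-account.json

# Application Default Credentials: client libraries and gcloud
# pick up the key from this well-known environment variable
ENV GOOGLE_APPLICATION_CREDENTIALS=/secrets/service-account.json
```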

2 Answers


When I previously used docker compose and aws it was easy to volume mount the ~/.aws folder to the container and everything just worked. Is there an equivalent solution for skaffold and gcp?

You didn't mention what kind of Kubernetes cluster you have deployed locally, but if you use Minikube it can actually be achieved in a very similar way.

Suppose you have already initialized your Cloud SDK locally by running:

gcloud auth login
gcloud container clusters get-credentials <cluster name> --zone <zone> --project <project name>
gcloud config set project <project name>

so you can run your gcloud commands on the local machine on which Minikube is installed. You can easily delegate this access to Pods created on Minikube, either by Skaffold or manually.

You just need to start your Minikube as follows:

minikube start --mount=true --mount-string="$HOME/.config/gcloud/:/home/docker/.config/gcloud/"

To keep things simple, I'm mounting the local Cloud SDK config directory into the Minikube host VM, using /home/docker/.config/gcloud/ as the mount point.

Once it is available on the Minikube host VM, it can easily be mounted into any Pod. We can use one of the Cloud SDK docker images available here, or any other image that comes with the Cloud SDK preinstalled.

Sample Pod to test this out may look like the one below:

apiVersion: v1
kind: Pod 
metadata:
  name: cloud-sdk-pod
spec:
  containers:
  - image: google/cloud-sdk:alpine
    command: ['sh', '-c', 'sleep 3600']
    name: cloud-sdk-container
    volumeMounts:
    - mountPath: /root/.config/gcloud
      name: gcloud-volume
  volumes:
  - name: gcloud-volume
    hostPath:
      # directory location on host
      path: /home/docker/.config/gcloud
      # this field is optional
      type: Directory

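Assuming the manifest above is saved locally (the filename below is just an example), the Pod can be created with:

```shell
# Create the test Pod from the manifest above
kubectl apply -f cloud-sdk-pod.yaml
```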
After connecting to the Pod by running:

kubectl exec -ti cloud-sdk-pod -- /bin/bash

we'll be able to execute any gcloud commands just as we can on our local machine.
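
As a quick sanity check inside the Pod, the mounted configuration should report the same authenticated account and active project as on the host (assuming the mount described above is in place):

```shell
# Inside the Pod: list the credentialed accounts picked up from the mounted config
gcloud auth list

# The active project should match the one set locally with `gcloud config set project`
gcloud config get-value project
```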

mario

If you're using Skaffold with a Minikube cluster, then you can use Minikube's gcp-auth addon, which does exactly this; it's described here in detail.
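
Enabling the addon is a single command (this assumes a running Minikube cluster and that you've already authenticated locally with `gcloud auth login`):

```shell
# Mounts your local gcloud credentials into Pods and sets
# GOOGLE_APPLICATION_CREDENTIALS in them automatically
minikube addons enable gcp-auth
```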

Gsquare