
This is really part 2 of a 2-part question. Part 1 was about a more graceful way to load secrets from Google Secret Manager during the middleware processing of ASP.NET Web API applications ...

What's the best (real-world) way to load secrets into an ASP.NET Core Web API that will be deployed to Google Cloud Run?

Part 2: Now that my Web API is deployed to Google Cloud Run, it doesn't have the credentials it needs to access secrets stored in Google Secret Manager.

Currently a new image is built whenever there is a commit on the test branch of my repo. That triggers a build process at Google Cloud Build, which uses this cloudbuild.yaml to do the build...

steps:
  # Docker Build
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 
           'gcr.io/${PROJECT_ID}/tic-tac-toe-api', 
           './api']

  # Docker Push
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 
           'gcr.io/${PROJECT_ID}/tic-tac-toe-api']

  # Entrypoint, timeout and environment variables
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'tic-tac-toe-api',
           '--image', 'gcr.io/${PROJECT_ID}/tic-tac-toe-api', '--region', 'us-central1']

images:
- gcr.io/$PROJECT_ID/tic-tac-toe-api

Once the image is built and deployed to Google Cloud Run, it needs Google Cloud credentials available to it so that it can access the Secret Manager API (or so I assume).

But how to get them in there?

Storing my Google Cloud credentials in a file that gets committed to the repo, so that cloudbuild.yaml can find it and incorporate it into the build, seems counter-productive.

I can't find a viable solution to this, and at this point I can't tell if that is because there isn't one, or because deploying .NET Web API applications to GCP inside a Docker container is just complete madness. :p

Jason Glover
  • Just to be on the same page, you don't have the credentials needed to access secrets stored in Secret Manager and you want to store/access your credentials to your image so that it can access your Secret Manager, am I correct? Also, do you encounter any errors or blockers when deploying your project? – Robert G Sep 07 '22 at 06:25
  • @RobertG - correct. When I deploy the code I get access denied errors trying to access GSM. I can tell from the stacktrace that the Google.Cloud.SecretManager.V1 package is present in the compiled build. But that library needs to be able to find Google Cloud credentials (or API keys) to be able to authorize. On my local dev end this code works fine because the client library can find MY Google Cloud credentials locally. But of course those aren’t deployed anywhere. – Jason Glover Sep 07 '22 at 17:30

1 Answer


You can create an additional step in Google Cloud Build that generates credentials and stores them in a file (e.g. ./google_credentials.json), before building the Docker container:

###### previous Cloud Build Steps ###

- name: 'bash'
  args: ['./cloudbuild_credentials.sh'] ### <--- script to generate creds
  dir: 'src'                            ### <--- directory
  id: 'generate-credentials'
  env:
      - PRIVATE_KEY_ID=$_PRIVATE_KEY_ID  ### <--- keys can be passed to Cloud Build via trigger substitutions
      - PRIVATE_KEY=$_PRIVATE_KEY

###### next Cloud Build Steps ###
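For context (this is an assumption about the setup, not part of the snippet above): substitution variables like $_PRIVATE_KEY_ID are normally defined on the Cloud Build trigger, but they can also be supplied when running a build by hand. A sketch, with placeholder values instead of real keys:

```shell
# Sketch: supplying the substitution variables on a manual build.
# The dummy values shown are placeholders, not real keys.
gcloud builds submit --config cloudbuild.yaml \
  --substitutions=_PRIVATE_KEY_ID="dummy-id",_PRIVATE_KEY="dummy-key"
```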

An example of what the script (cloudbuild_credentials.sh) might look like:

printf '{
  "type": "service_account",
  "project_id": "@define",
  "private_key_id": "%s",
  "private_key": "%s",
  "client_email": "@define",
  "client_id": "@define",
  "auth_uri": "@define",
  "token_uri": "@define",
  "auth_provider_x509_cert_url": "@define",
  "client_x509_cert_url": "@define"
}
' "$PRIVATE_KEY_ID" "${PRIVATE_KEY}" > ./google_credentials.json
ls .   # list files to verify the credentials file was written
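Before wiring this into Cloud Build, the template can be sanity-checked locally. A minimal sketch, using dummy placeholder values (dummy-id / dummy-key are not real keys) and a trimmed-down version of the template, then validating the output as JSON:

```shell
# Dummy values standing in for the trigger substitutions.
PRIVATE_KEY_ID='dummy-id'
PRIVATE_KEY='dummy-key'

# Trimmed-down version of the template above, written the same way.
printf '{
  "type": "service_account",
  "private_key_id": "%s",
  "private_key": "%s"
}
' "$PRIVATE_KEY_ID" "$PRIVATE_KEY" > ./google_credentials.json

# Confirm the result parses as JSON and the values landed in the right fields.
python3 -c 'import json; c = json.load(open("google_credentials.json")); print(c["private_key_id"], c["private_key"])'
```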

This way you commit only non-sensitive data to the repo, and pass the keys in from outside, for example via Google Cloud Build trigger substitution variables.
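One detail left implicit above (an assumption on my part about the asker's setup): the generated file has to end up somewhere the Google.Cloud.SecretManager.V1 client can find it. The Google client libraries honor the GOOGLE_APPLICATION_CREDENTIALS environment variable, so one option is to COPY the file into the image during the Docker build and point the variable at it when deploying; the /app path below is illustrative, not prescribed:

```shell
# Assumes the Dockerfile contains something like:
#   COPY google_credentials.json /app/google_credentials.json
# Then tell the client library where the file lives when deploying:
gcloud run deploy tic-tac-toe-api \
  --image "gcr.io/${PROJECT_ID}/tic-tac-toe-api" \
  --region us-central1 \
  --set-env-vars GOOGLE_APPLICATION_CREDENTIALS=/app/google_credentials.json
```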

star67