
I have several Google Cloud projects with dedicated environments for prod, staging, dev, etc. All the Docker images are platform-agnostic and completely controlled by environment variables, so they should be usable from all environments. What I have found is to create a dedicated project, push the Docker images to a repository there, and then grant the consuming service accounts read access via a registry reader role.

  1. Which service account has to be granted access to the registry so that GKE deployments can pull the images?
  2. Is this the recommended approach? It feels a little strange that a dedicated project is needed. Is there something like a "global" repository?
k_o_
  • Sorry, I didn't get your second question properly. Could you please clarify? – Harsh Manvar Aug 11 '23 at 09:22
  • 1
    @HarshManvar Thanks for your reply. I have considered a project to be an isolated thing that does not interact with other projects. I was wondering if there is something like a global repository, but maybe I'm just biased by the naming and it is absolutely fine to run a dedicated project for the Docker registry. – k_o_ Aug 11 '23 at 11:38
  • Yes, it's fine, but it comes with a few pros & cons. Feel free to check the updated answer. – Harsh Manvar Aug 11 '23 at 11:55

1 Answer


Which service account has to be granted access to the registry so that GKE deployments can pull the images?

Use Workload Identity with GKE: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity

You can create a new service account in IAM and assign the necessary roles/permissions to it.

That GCP service account is then linked to a Kubernetes service account, and GKE workloads will be able to pull images from the Docker registry.

So in this case you don't need to manage an imagePullSecret or refresh it from time to time; auth is handled by the GCP service account.
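As a rough sketch of that setup, the steps might look like the following. All names here are hypothetical placeholders (registry project `registry-prj`, workload project `dev-prj`, namespace `default`, Kubernetes service account `app-ksa`); substitute your own, and note this assumes Workload Identity is already enabled on the cluster:

```shell
# 1. Create a dedicated Google service account in the workload project.
gcloud iam service-accounts create gke-image-puller \
  --project=dev-prj

# 2. Grant it read access on the registry project's Artifact Registry.
gcloud projects add-iam-policy-binding registry-prj \
  --member="serviceAccount:gke-image-puller@dev-prj.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.reader"

# 3. Allow the Kubernetes service account to impersonate it (Workload Identity).
gcloud iam service-accounts add-iam-policy-binding \
  gke-image-puller@dev-prj.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:dev-prj.svc.id.goog[default/app-ksa]"

# 4. Annotate the Kubernetes service account so pods use the GCP identity.
kubectl annotate serviceaccount app-ksa --namespace default \
  iam.gke.io/gcp-service-account=gke-image-puller@dev-prj.iam.gserviceaccount.com
```

One caveat worth checking against the docs: the image pull itself is performed by the kubelet using the node's service account, so depending on your setup you may also need to grant the node service account read access on the registry project.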

Update:

Yes, you can run a project specifically for the Docker registry; it mostly depends on your needs.

Just to mention: if by registry you mean GCR, be aware that it is being deprecated over time, so check out Artifact Registry instead.
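For example, creating an Artifact Registry Docker repository in the shared project and pushing to it could look like this (project `registry-prj`, region `europe-west1`, and the image name are hypothetical):

```shell
# Create a Docker-format repository in the shared registry project.
gcloud artifacts repositories create shared-images \
  --project=registry-prj \
  --repository-format=docker \
  --location=europe-west1

# Tag and push an image to the new repository.
docker tag myapp:1.0 europe-west1-docker.pkg.dev/registry-prj/shared-images/myapp:1.0
docker push europe-west1-docker.pkg.dev/registry-prj/shared-images/myapp:1.0
```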

There are pros & cons to keeping a single project specific to the Docker registry:

  • Pro: simplifies management & access control.
  • Con: more difficult to isolate artifacts between projects.

If all artifacts are stored in one project and all other projects pull from there, any sensitive artifacts that you don't want accessible to every project will need separate repositories with their own access control.
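Artifact Registry lets you grant IAM roles at the repository level rather than the project level, which is one way to do that isolation. A sketch, again with hypothetical names (`sensitive-images` repo, `registry-prj` project, consumer SA from another project):

```shell
# Grant read access on one repository only, so other repositories
# in registry-prj remain inaccessible to this service account.
gcloud artifacts repositories add-iam-policy-binding sensitive-images \
  --project=registry-prj \
  --location=europe-west1 \
  --member="serviceAccount:gke-image-puller@dev-prj.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.reader"
```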

Harsh Manvar
  • This approach seems more difficult compared to just granting "Artifact Registry Reader". I had assumed that a service account is already created automatically for a GKE cluster and I could use it? Is that permission too powerful, giving too much access? What are the "necessary roles and permissions" you are referring to? Also, since the Docker images are in a different project, I would think the permission has to be granted in that project? – k_o_ Aug 11 '23 at 11:58
  • Giving `Artifact Registry Reader` is also fine; you can do that. However, the GKE default service account is the Compute Engine default service account, whose default role is `Editor`. To harden security, it is better to keep different SAs for different purposes, with the minimum permissions/roles attached to each. For workloads, just keep an SA with `Artifact Registry Reader` plus whatever else your apps use. You can read about the necessary roles for GKE here: https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster#use_least_privilege_sa Yes, right: if you go that way, you have to grant the permission across projects. – Harsh Manvar Aug 11 '23 at 12:07