
Setting up authentication for Docker | Artifact Registry Documentation suggests that gcloud is more secure than using a JSON file with credentials. I disagree. In fact, I'll argue the exact opposite is true. What am I misunderstanding?

Setting up authentication for Docker | Artifact Registry Documentation says:

gcloud as credential helper (Recommended)

Configure your Artifact Registry credentials for use with Docker directly in gcloud. Use this method when possible for secure, short-lived access to your project resources. This option only supports Docker versions 18.03 or above.

followed by:

JSON key file

A user-managed key-pair that you can use as a credential for a service account. Because the credential is long-lived, it is the least secure option of all the available authentication methods.

The JSON key file contains a private key and other goodies giving a hacker long-lived access. The keys to the kingdom. But only to the Artifact Registry in this instance, because the service account the JSON file belongs to has only those specific rights.

Now gcloud has two auth options:

  1. gcloud auth activate-service-account ACCOUNT --key-file=KEYFILE
  2. gcloud auth login
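
In both cases, Docker gets wired to gcloud via the credential helper. A minimal setup sketch (the `europe-west4` host below is an example; substitute your registry's hostname):

```shell
# Register gcloud as Docker's credential helper for one
# Artifact Registry host (hostname here is an example):
gcloud auth configure-docker europe-west4-docker.pkg.dev

# This writes an entry like the following into ~/.docker/config.json:
#   "credHelpers": { "europe-west4-docker.pkg.dev": "gcloud" }
# Docker then invokes docker-credential-gcloud at pull/push time to
# fetch a short-lived access token for whichever account is active.
```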

Let's start with gcloud and a service account: here it stores KEYFILE unencrypted in ~/.config/gcloud/credentials.db. Using the JSON file directly boils down to `docker login -u _json_key --password-stdin https://some.server < KEYFILE`, which stores the KEYFILE contents in ~/.docker/config.json. So using gcloud with a service account or just using the JSON file directly should be equivalent, security-wise: both store the same KEYFILE unencrypted in a file.
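
The `docker login` path is easy to verify: Docker records the credential under `"auths"` in ~/.docker/config.json as base64 of `user:password`, so with `-u _json_key` the entire key file is recoverable by anyone who can read that file. A sketch with a dummy key (assumes GNU coreutils `base64`):

```shell
# A placeholder standing in for a real service-account key file:
KEY='{"type":"service_account","private_key":"FAKE"}'

# Docker stores base64("<user>:<password>") in ~/.docker/config.json.
# This is encoding, not encryption:
AUTH=$(printf '%s' "_json_key:$KEY" | base64 -w0)

# Anyone who can read config.json gets the key back verbatim:
printf '%s' "$AUTH" | base64 -d
# -> _json_key:{"type":"service_account","private_key":"FAKE"}
```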

gcloud auth login requires login with a browser, where I consent to giving gcloud access to my user account in its entirety. It is not limited to the Artifact Registry like the service account is. Looking with `sqlite3 ~/.config/gcloud/credentials.db .dump`, I can see that it stores an access_token but also a refresh_token. If a hacker has access to ~/.config/gcloud/credentials.db with access and refresh tokens, doesn't he own the system just as much as if he had the JSON file? Actually, this is worse: my user account is not limited to accessing just the Artifact Registry, so the attacker now has access to everything my user has access to.

So all in all: gcloud auth login is at best security-wise equivalent to using the JSON file. But because the access is not limited to the Artifact Registry, it is in fact worse.

Do you disagree?

Peter V. Mørch
  • 1/2) One item that you have forgotten to consider. Create another login (Google Identity) with only the desired permissions. Your other points are valid. However, your security to the keys on your desktop is only as secure as your desktop. If someone breaks into your desktop, you have big problems. Therefore your concern about unencrypted keys is valid but needs to be balanced. – John Hanley Oct 21 '20 at 05:08
  • 2/2) On my website, I wrote an article about impersonation which removes the requirement to store the JSON key material. This might give you some details and ideas. https://www.jhanley.com/google-cloud-improving-security-with-impersonation/ – John Hanley Oct 21 '20 at 05:08
  • Funny, @JohnHanley, this is the third issue today you and I have been discussing: [How to move Google Cloud DNS entries between 2 projects?](https://stackoverflow.com/questions/58824367/how-to-move-google-cloud-dns-entries-between-2-projects/58825735#comment113960357_58825735), [google cloud platform - Where are gcloud credentials stored - Super User](https://superuser.com/questions/1506674/where-are-gcloud-credentials-stored/1508016#1508016) and this issue. Thanks for your activity, John! – Peter V. Mørch Oct 21 '20 at 05:47
  • @JohnHanley: About "your security to the keys on your desktop is only as secure as your desktop" - that is true for both access+refresh tokens and for JSON files, so it doesn't really affect this discussion, right? About "Create another login (Google Identity) with only the desired permissions": So that would require all the developers in our team to create separate Google Identities for all the combinations of permissions they want to give applications. That doesn't scale very well. And, it would just leave us in exactly the same place as just using the JSON file in the first place. – Peter V. Mørch Oct 21 '20 at 05:50
  • @JohnHanley I'll take a look at impersonation – Peter V. Mørch Oct 21 '20 at 05:55
  • One additional item. You can rotate service account keys. You can then change long-lived service accounts into short-lived ones. At work, we rotate all credentials every 90 days. This can be a pain ... Good security takes effort. For every secure scheme I think of, someone else will find a weakness. – John Hanley Oct 21 '20 at 06:52
  • Another item. Your scenario is a developer's setup. In production, you would not have the CLI or SDK installed. You will be using either Google OAuth or a Service Account. The SDK database files with credentials would not exist. – John Hanley Oct 21 '20 at 16:23
  • As mentioned by John Hanley in the previous comment, if you create a separate IAM service account and create a VM that uses it, that VM can access cloud resources permitted for the account without using additional credentials inside the VM that can be compromised. From my POV gcloud is more useful as a management tool than as a tool to help microservices get access to cloud resources on a regular basis. – VAS Oct 22 '20 at 06:03
  • Thanks @VAS & JohnHanley: "Your scenario is a developer's setup. In production, you would not have the CLI or SDK installed" This is assuming that the machines running docker and the containers are running in GCP which is not the case for us for various reasons. These are standard on-premise virtual machines so they need credentials for the Artifact Registry in GCP one way or another or they won't be able to pull the images to run. We're planning on running: `docker login -u _json_key --password-stdin https://europe-west4-docker.pkg.dev < keyfile-readonly` on production machines... – Peter V. Mørch Oct 22 '20 at 11:15
  • Would you consider to use some kind of artifactory as a registry proxy with unrestricted access for local clients that can authenticate on external registry using stored keys/credentials? Here is the example of such approach: https://blog.sonatype.com/using-nexus-3-as-your-repository-part-3-docker-images https://blog.sonatype.com/nexus-as-a-container-registry – VAS Oct 22 '20 at 11:22
  • Sure, we would love that and have been considering Nexus. Problem is that our Operations are swamped and getting a server to run Nexus on is not high enough on their priority list to consider at the moment. So we're opting to use the GCP one. Hence this discussion. On top of that, our secret mission is to be able to use GCP for *all* development, and then using GCP Artifact Registry already now is great for that purpose too. – Peter V. Mørch Oct 22 '20 at 13:23
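
The key rotation John Hanley mentions can be scripted with gcloud; a hedged sketch (the service-account email, file names, and KEY_ID below are placeholders):

```shell
# Rotate a service-account key: create a new one, roll it out,
# then delete the old one. SA email and file names are placeholders.
SA="artifact-reader@my-project.iam.gserviceaccount.com"

# List current keys and note the KEY_ID of the one to retire:
gcloud iam service-accounts keys list --iam-account="$SA"

# Create a replacement key file:
gcloud iam service-accounts keys create new-key.json --iam-account="$SA"

# After new-key.json is deployed to the machines that pull images,
# delete the old key by its KEY_ID:
gcloud iam service-accounts keys delete OLD_KEY_ID --iam-account="$SA"
```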

0 Answers