
I've been working with Kubernetes for the past six months, and we've deployed a few services.

We're just about to deploy another, which stores encrypted data and puts the keys in KMS. This requires two service accounts: one for the data and one for the keys.

Data access to this must be audited. Since access to this data is very sensitive, we are reluctant to put both service accounts in the same namespace: if it were compromised in any way, the attacker could gain access to both the data and the keys without that access being audited.

For now we have one key in a secret and the other we're going to manually post to the single pod.

This is horrible, as it requires that a single person be trusted with this key, and it limits scalability. Luckily this service will be very low volume.

Has anyone else come up against the same problem? How have you gotten around it?

cheers

Requirements

  • No single person ever has access to both keys (datastore and KMS)
Mark

2 Answers


Data access to this must be audited

If you enable audit logging, every API call made with this service account will be logged. This may not help you if your service is never called via the Kubernetes API, but given that a service account is involved, it sounds like it would be.
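
For reference, a minimal audit policy along those lines might look something like the sketch below. The rule set is purely illustrative, and it assumes you control the kube-apiserver flags (--audit-policy-file and --audit-log-path), which isn't possible on every managed offering.

    # Illustrative audit policy: record who touched Secrets without logging
    # the secret payloads themselves.
    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
      # Log metadata (user, verb, resource, timestamp) for every Secret access.
      - level: Metadata
        resources:
          - group: ""               # core API group
            resources: ["secrets"]
      # Ignore everything else in this minimal example.
      - level: None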

For now we have one key in a secret and the other we're going to manually post to the single pod.

You might consider using Vault for this. If you store the secret in Vault, you can use something like this to have the secret pushed down into the pod as an environment variable automatically. This is a little more involved than your current process, but it is considerably more secure.
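
As one possible shape of that setup (not necessarily the exact tool linked above), here is a rough sketch using the Vault Agent sidecar injector; the role name, secret path, and image are placeholders:

    # Illustrative pod using the Vault Agent sidecar injector. The injector renders
    # the Vault secret to a file under /vault/secrets/, and the container exports
    # it as an environment variable at startup.
    apiVersion: v1
    kind: Pod
    metadata:
      name: datastore-app
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "datastore-app"   # Vault Kubernetes-auth role (placeholder)
        # Render the secret at secret/data/datastore to /vault/secrets/datastore-key
        vault.hashicorp.com/agent-inject-secret-datastore-key: "secret/data/datastore"
        vault.hashicorp.com/agent-inject-template-datastore-key: |
          {{- with secret "secret/data/datastore" -}}
          {{ .Data.data.key }}
          {{- end -}}
    spec:
      serviceAccountName: datastore-app
      containers:
        - name: app
          image: example/datastore-app:latest        # placeholder image
          command: ["sh", "-c", "export DATASTORE_KEY=$(cat /vault/secrets/datastore-key) && exec /app"]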

You can also use Vault alongside Google Cloud KMS, which is detailed in this article.

jaxxstorm
  • Auditing is enabled, but it doesn't help much when all it shows is that they downloaded the data store and the keys and can now decrypt the whole database. – Mark Jul 23 '18 at 15:38
  • I'll take a look at Vault and see if that satisfies the requirements, thanks. – Mark Jul 23 '18 at 15:39

What you're describing is pretty common: using a key / service account / identity stored in Kubernetes Secrets to access an external secret store.

I'm a bit confused by the double-key concept: what are you gaining by having one key in Secrets and the other in the pod? If Secrets are compromised, then etcd is compromised and you have bigger problems. I would suggest you focus instead on locking down Secrets, using audit logs, and making the key easy to rotate in case of compromise.
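
On the "locking down Secrets" point, namespace-scoped RBAC is the main lever. A minimal sketch (the namespace, role, secret, and service account names are all placeholders):

    # Illustrative RBAC: only the workload's own service account may read the
    # secret, and only that one secret by name.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: read-datastore-key
      namespace: datastore
    rules:
      - apiGroups: [""]
        resources: ["secrets"]
        resourceNames: ["datastore-key"]   # restrict to the single secret
        verbs: ["get"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: read-datastore-key
      namespace: datastore
    subjects:
      - kind: ServiceAccount
        name: datastore-app
        namespace: datastore
    roleRef:
      kind: Role
      name: read-datastore-key
      apiGroup: rbac.authorization.k8s.io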

A few items to consider:

  • If you're mostly using Kubernetes, consider storing (encrypted) secrets in Kubernetes Secrets (see the sketch after this list).
  • If you're storing secrets centrally outside of Kubernetes, like you're describing, consider just using a single Kubernetes Secret: you will get Kubernetes audit logs for access to the Secret (see the recommended audit-policy) and Cloud KMS audit logs for use of the key.
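
For the first point in the list, one option (if you run your own control plane, or your provider exposes the setting) is envelope encryption of Secrets at rest, configured via an EncryptionConfiguration passed to the kube-apiserver with --encryption-provider-config. A rough sketch, with the KMS plugin name and socket path as placeholders:

    # Illustrative encryption-at-rest config: Secrets written to etcd are
    # envelope-encrypted through an external KMS plugin.
    apiVersion: apiserver.config.k8s.io/v1
    kind: EncryptionConfiguration
    resources:
      - resources: ["secrets"]
        providers:
          - kms:
              name: cloud-kms-plugin                         # placeholder plugin name
              endpoint: unix:///var/run/kms-plugin/socket.sock
              cachesize: 1000
              timeout: 3s
          - identity: {}                                     # fallback for reading existing plaintext secrets
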
  • Maybe I didn't describe the issue correctly: we are storing the service accounts in Secrets, not in pods, git, or anywhere else. The issue is that anyone who gains access to the namespace could read the Secrets and access the datastore and KMS, bypassing our audit logging. So we have added an API endpoint so the second key can be posted to the pod, keeping it in local memory only. – Mark Jul 23 '18 at 15:24
  • Got it. Kubernetes audit logging will record requests to that secret normally. If an attacker can access the secrets (not via the front door), you are correct that it wouldn't be audit logged. – Maya Kaczorowski Jul 25 '18 at 16:53