
I'm having a hard time reconciling the common online advice that injecting secrets (usually passwords) into Docker containers as environment variables is "not secure" with the native features of AWS ECS and even EKS, where secrets stored in AWS Secrets Manager are provided to the container as environment variables. I want to use the native features of these platforms, but it seems that this is not a good idea.

I really like the native /run/secrets approach of "raw" Docker, but that feature doesn't scale up to Secrets Manager + ECS. I'm left thinking that the only "secure" way of managing secrets and exposing them to my app is to write dedicated application code that queries AWS Secrets Manager directly. Is this conclusion correct? Or can I trust the platform?
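For context, a minimal sketch (not part of the original question) of what such dedicated application code could look like with boto3; the secret name my-app/db-credentials is a hypothetical example, and the sketch assumes the secret value is stored as a JSON string:

    import json
    import boto3

    def fetch_secret(secret_id: str) -> dict:
        """Read a secret directly from AWS Secrets Manager, bypassing env vars."""
        client = boto3.client("secretsmanager")
        response = client.get_secret_value(SecretId=secret_id)
        # SecretString carries text secrets; binary secrets arrive in SecretBinary.
        return json.loads(response["SecretString"])

    # Hypothetical usage:
    # db_credentials = fetch_secret("my-app/db-credentials")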

References:

And counter-arguments:


1 Answer


I think most of the problems described in those articles can be mitigated by removing or replacing the variable immediately after it has been read and acknowledged. Once it has been removed, there is little to no difference between the two methods. Perhaps the ENV method even gets a point here: after unsetting the variable there is nowhere left to read the value from, whereas the secret file stays around until the end, since mounted files cannot be removed.
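A minimal sketch of that read-and-remove step in Python (illustrative only; the variable name DB_PASSWORD is a hypothetical example):

    import os

    # Read the secret once at startup, then drop it from the process environment
    # so later introspection (debug endpoints, child processes, crash reporters)
    # no longer sees it. os.environ.pop() also unsets the underlying variable
    # on platforms that support unsetenv().
    db_password = os.environ.pop("DB_PASSWORD", None)
    if db_password is None:
        raise RuntimeError("DB_PASSWORD was not provided")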

I agree with the articles that tools which send you reports on crashes might indeed accidentally expose sensitive values. But it's up to you to decide when to load them. Therefore, you can deal with the sensitive data first, and only then enable the things that handle logs and crashes.
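To illustrate that ordering (sentry_sdk is used here purely as an example of a crash-reporting library; none of this is from the original answer):

    import os
    import sentry_sdk

    # 1. Handle sensitive values first and scrub them from the environment.
    db_password = os.environ.pop("DB_PASSWORD", None)

    # 2. Only then enable anything that might ship environment data elsewhere,
    #    such as a crash reporter.
    sentry_sdk.init(dsn=os.environ.get("SENTRY_DSN"))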

There is one rare case where you must avoid using environment variables for sensitive data: cron. Having cron inside containers is a bad practice by itself, and on top of that it exposes all environment variables in the headers of the emails it sends:

X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/home/user>
X-Cron-Env: <PATH=/usr/bin:/bin>
– anemyte
  • I agree that if I opt to write application code to read/write env vars, then I have more control: at that point I may as well have my app read from Secrets Manager directly and bypass env vars. But ECS and EKS provide a _platform feature_ to present secrets to my container as env vars, and then I don't control "when" they are read or what happens during a crash. (Indeed, can containers even manipulate the env presented to them by the orchestrator to "blank" the values?) My issue is _why_ isn't this being flagged more widely as an insecure/useless feature? – Peter McEvoy Jul 08 '21 at 11:47
  • @PeterMcEvoy It's hard to call it 'useless' when it's convenient, easy, and so widely supported. As for insecure, I think we can agree that without 'extra circumstances' (like debugging libraries) it is not a vulnerability on its own. In other words, the threat comes not from environment variables but from other tools and human errors that can expose the variables. As for why there is no option to mount a secret as a file - that is a mystery to me too. – anemyte Jul 08 '21 at 12:24
  • Thanks @anemyte - it's a pity... We're refactoring existing code into containers, and that code already differentiates between environmental config and secrets. Environmental config is a no-brainer as env vars... I was hoping to be convinced that secrets could be the same, but as one article says, it breaks the Principle of Least Surprise - we'll just have to handle secrets in code. – Peter McEvoy Jul 08 '21 at 15:09
  • @PeterMcEvoy Have you considered keeping the secrets in EKS rather than AWS SM? Kubernetes allows mounting secrets as files (a minimal manifest is sketched below). – anemyte Jul 08 '21 at 19:22
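For illustration, a minimal sketch of that suggestion: a Kubernetes Secret mounted into the container as files instead of environment variables (all names here are hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:latest
          volumeMounts:
            - name: db-credentials
              mountPath: /run/secrets/db   # each key of the Secret appears as a file here
              readOnly: true
      volumes:
        - name: db-credentials
          secret:
            secretName: db-credentials     # an existing Secret in the same namespace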