
I have multiple applications running on different servers, but they all require the same configuration file (which contains sensitive information). Currently, I just keep a copy of the config file on every server, but that doesn't scale well and it's a pain when I want to make changes.

What are some secure solutions for sharing a single config across multiple servers? These are EC2 instances, so if there is an AWS solution that would make life easier, I'm all ears.

The solution I'm currently leaning towards is to encrypt the config and put it on AWS S3, then decrypt it server-side using AWS Key Management Service (KMS). Any other ideas would be greatly appreciated.
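Roughly what I have in mind, sketched with boto3 (the key alias, bucket, and file names below are just placeholders, not anything I've set up yet):

    import boto3

    # Sketch of the approach I'm considering. Note that kms.encrypt() is limited
    # to 4 KB of plaintext, so a larger config would need envelope encryption.
    kms = boto3.client("kms")
    s3 = boto3.client("s3")

    # Locally: encrypt the config with KMS and upload the ciphertext to S3.
    with open("config.yml", "rb") as f:
        ciphertext = kms.encrypt(KeyId="alias/app-config-key",
                                 Plaintext=f.read())["CiphertextBlob"]
    s3.put_object(Bucket="my-config-bucket", Key="app/config.yml.enc", Body=ciphertext)

    # On each server: download the ciphertext and ask KMS to decrypt it
    # (the instance's role would need kms:Decrypt on that key).
    blob = s3.get_object(Bucket="my-config-bucket", Key="app/config.yml.enc")["Body"].read()
    config = kms.decrypt(CiphertextBlob=blob)["Plaintext"]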

jwerre

1 Answer


AWS Elastic File System (EFS) is a shared file system that works like NFS. It will cost basically nothing to store one configuration file. You can encrypt the file yourself if you want to, though EFS doesn't provide encryption as part of the service yet.

If you want a shared file system but EFS isn't available, you could run an instance with an NFS share. For storing configuration files you could run this on a t2.nano for about $3/month. You could alternatively serve the NFS share from a machine you already have. Obviously this machine should be on a private subnet with no route to the internet, and suitably hardened.

S3 is a good place to store files, and it supports encryption at rest. I'm always a little wary of server-side encryption, since the keys and the data are both stored in AWS, though AWS claims the keys are held very securely inside KMS and can't be retrieved. However, if you use client-side encryption you have to store the encryption keys on your instances, which is almost certainly less secure than using a well-designed, managed service like KMS.
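As a sketch of the server-side (SSE-KMS) variant, assuming a hypothetical bucket name and key alias, the upload and download with boto3 would look something like this; S3 decrypts the object transparently on GetObject as long as the caller is allowed to use the key:

    import boto3

    s3 = boto3.client("s3")

    # Upload once, asking S3 to encrypt at rest with a KMS key.
    # "my-config-bucket" and "alias/app-config-key" are placeholder names.
    with open("config.yml", "rb") as f:
        s3.put_object(
            Bucket="my-config-bucket",
            Key="app/config.yml",
            Body=f.read(),
            ServerSideEncryption="aws:kms",
            SSEKMSKeyId="alias/app-config-key",
        )

    # On each instance: a plain GetObject returns the decrypted config,
    # provided the caller has s3:GetObject and kms:Decrypt permissions.
    config = s3.get_object(Bucket="my-config-bucket", Key="app/config.yml")["Body"].read()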

You might also consider using EC2 user data to provide information to the instances at boot. This would be a good way to do things with a fleet of auto-scaled instances, as you define it once and it's available to every instance that's started.
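For example, a process on the instance can read back whatever user data you defined at launch from the instance metadata service (sketch only; instances configured to enforce IMDSv2 would need a session token first):

    import urllib.request

    # Read this instance's user data from the EC2 instance metadata service.
    with urllib.request.urlopen("http://169.254.169.254/latest/user-data", timeout=2) as resp:
        user_data = resp.read().decode()

    print(user_data)  # e.g. a small bootstrap script or key/value settings set at launch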

Somewhat related is assigning an IAM role to an EC2 instance before it boots; this lets you define a policy that grants the instance access to other AWS resources without storing credentials on it. This probably won't help your use case on its own, but it's related.
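As a rough illustration, assuming hypothetical role and bucket names, an inline policy attached to the instance's role might grant read access to just the config object; code running on the instance then picks up credentials from the instance profile automatically, with no keys on disk:

    import json
    import boto3

    iam = boto3.client("iam")

    # Hypothetical names; the policy lets instances with this role read only the shared config.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-config-bucket/app/config.yml",
        }],
    }
    iam.put_role_policy(
        RoleName="app-instance-role",
        PolicyName="read-shared-config",
        PolicyDocument=json.dumps(policy),
    )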

You could also use a sync tool like rsync, BitTorrent Sync, or Dropbox to move files between machines. It's probably not a great solution in this case, but it could work for other use cases.

Tim
  • I looked into using EFS but it's not available in my region so I can't use it :(. – jwerre Jan 25 '17 at 21:40
  • I added another paragraph with another recommendation, running an NFS share on a t2.nano or an existing machine you have in a private subnet. – Tim Jan 25 '17 at 21:46
  • EFS seems a little overkill for this. I think I'm going to go the S3 route. I can create a KMS key and assign the server role to it, then I won't need to store the KMS key on the server; the config will be automatically decrypted by S3 when the object is requested. – jwerre Jan 25 '17 at 22:01
  • I don't think EFS is overkill, it's just a shared drive with a fancy name, fairly old technology. But sure S3 with KMS is fine too. – Tim Jan 26 '17 at 08:03