My EDBs are separated by environment:
We upload all keys, each with a special suffix, to every new EC2 instance as we bootstrap it, and then, at the end of the first chef-client run (which happens right as the instance starts), we remove every key that was not used by any recipe in the run_list.
All key files are uploaded with owner and group "root" and read-only permissions.
Every recipe that uses an EDB generates the EDB name and the key file name at recipe run time by concatenating 'edb_' + the node's environment + a recipe/item-specific name + '.key', and then looks for the key by that name. (If the key doesn't exist, this throws an exception by default.)
Thus, for our couchdb server, running a role called 'couch', to get the credentials we're using for the admin user(s) in the dev environment, the recipe looks for a key named 'edb_dev_couch.key'.
It then searches a data bag named 'edb_dev' for an item named 'couch_credentials'.
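Roughly, the lookup inside that recipe looks like the sketch below. The /etc/chef/keys directory and the field name at the end are illustrative assumptions; the bag, item, and key names follow the scheme above.

```ruby
# Sketch only -- the keys directory is an assumption, the names follow the
# naming scheme described above.
key_path = "/etc/chef/keys/edb_#{node.chef_environment}_couch.key"

# load_secret raises if the key file is missing, which is the behaviour we want
secret = Chef::EncryptedDataBagItem.load_secret(key_path)

# bag per environment, item per recipe/role
couch = Chef::EncryptedDataBagItem.load("edb_#{node.chef_environment}",
                                        'couch_credentials',
                                        secret)

admin_password = couch['admin_password'] # field name is illustrative
```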
For managing keys, I'm currently using the simple approach of:
- Upload all EDB keys via the bootstrap script and append '_x' to the key names.
- Have each recipe that uses an EDB look in the keys directory for the key that it needs.
- If the key exists with a '_x' suffix, rename the key to remove the '_x' suffix.
- Add a recipe at the end of every run_list that deletes all keys that still have a '_x' suffix (see the sketch after this list).
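The rename and cleanup steps end up looking roughly like this (again, /etc/chef/keys is illustrative). The first snippet sits at the top of each EDB-using recipe, before it loads the secret; the second is the delete_keys recipe itself.

```ruby
# In each recipe that uses an EDB: if only the '_x'-suffixed copy of the key
# exists, rename it so the cleanup recipe leaves it alone.
key_path = "/etc/chef/keys/edb_#{node.chef_environment}_couch.key"
if !::File.exist?(key_path) && ::File.exist?("#{key_path}_x")
  ::File.rename("#{key_path}_x", key_path)
end
```

```ruby
# delete_keys recipe -- always the last item in the run_list: any key that
# still carries the '_x' suffix was never claimed by a recipe during this
# run, so remove it.
Dir.glob('/etc/chef/keys/*_x').each do |leftover|
  file leftover do
    action :delete
  end
end
```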
Hopefully, this limits the window during which keys outside the scope of a single node are exposed to the time between bootstrap and the end of the first chef-client run.
This is our first round of testing how to secure the keys, but so far it meets our current needs: it prevents one rooted dev server from being able to immediately access any other server's credentials that are stored in an EDB.
To keep one cleanup recipe at the end of every run_list, I use a knife exec job that makes sure this delete_keys recipe is always the last recipe on every node.
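A rough sketch of that knife exec script, using the run_list helpers available to knife exec (the recipe name delete_keys matches the description above; the script name is arbitrary):

```ruby
# ensure_delete_keys_last.rb -- run with `knife exec ensure_delete_keys_last.rb`
entry = 'recipe[delete_keys]'
nodes.all do |n|
  n.run_list.remove(entry)   # drop it from wherever it currently sits
  n.run_list << entry        # re-append so it is always last
  n.save
end
```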