AWS Elastic File System (EFS) is a shared file system that you mount over NFS. Storing a single configuration file on it will cost next to nothing. You can encrypt the file yourself if you want to, though EFS doesn't provide encryption as part of the service yet.
If you want a shared file system but EFS isn't available, you could run an instance with an NFS share; for storing configuration files a t2.nano at roughly $3/month is plenty. Alternatively, export an NFS share from a machine you already have. Either way, that machine should sit on a private subnet with no route to the internet and be suitably hardened.
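From the client side, EFS and a self-hosted NFS export are mounted the same way. Here's a minimal sketch of mounting the share and reading a config file from it; the file system DNS name, mount point, and file name are placeholders, and it assumes the script runs as root on an instance with the NFS client tools installed:

```
import subprocess
from pathlib import Path

# Placeholder values -- substitute your EFS DNS name (or NFS server
# hostname/export path) and your preferred mount point.
NFS_TARGET = "fs-12345678.efs.us-east-1.amazonaws.com:/"
MOUNT_POINT = Path("/mnt/shared-config")

MOUNT_POINT.mkdir(parents=True, exist_ok=True)

# Standard NFSv4 mount; requires root.
subprocess.run(
    ["mount", "-t", "nfs4", "-o", "nfsvers=4.1", NFS_TARGET, str(MOUNT_POINT)],
    check=True,
)

config_text = (MOUNT_POINT / "app.conf").read_text()
print(config_text)
```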
S3 is a good place to store files, and it supports encryption at rest. I'm always a little wary of server-side encryption, since the keys and the data are both stored in AWS, although AWS claims the keys are held securely inside KMS and can't be retrieved. On the other hand, if you use client-side encryption you have to store encryption keys on your instances, which is almost certainly less secure than relying on a well-designed, third-party service like KMS.
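As a rough sketch of the server-side-encryption route with boto3 (the bucket name, object key, and KMS key alias below are made up for illustration):

```
import boto3

BUCKET = "my-config-bucket"      # placeholder
KEY = "app/app.conf"             # placeholder
KMS_KEY = "alias/config-files"   # placeholder KMS key alias

s3 = boto3.client("s3")

# Upload the config with server-side encryption under a KMS key.
with open("app.conf", "rb") as f:
    s3.put_object(
        Bucket=BUCKET,
        Key=KEY,
        Body=f.read(),
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId=KMS_KEY,
    )

# Instances fetch it at boot; decryption is transparent as long as
# their credentials are allowed to use the KMS key.
obj = s3.get_object(Bucket=BUCKET, Key=KEY)
config_text = obj["Body"].read().decode("utf-8")
```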
You might also consider using EC2 user data to provide information to instances at boot. This works well with a fleet of auto-scaled instances, since you define the user data once and it's available to every instance that starts.
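A sketch of pulling the user data back out on the instance at boot, via the instance metadata service; the output path is just an example:

```
import os
import urllib.request

# On the instance itself, user data is served by the metadata
# endpoint; no credentials are required.
USER_DATA_URL = "http://169.254.169.254/latest/user-data"

with urllib.request.urlopen(USER_DATA_URL, timeout=2) as resp:
    user_data = resp.read().decode("utf-8")

# If the user data is (or contains) your config, write it out for the
# application to pick up. Destination path is a placeholder.
os.makedirs("/etc/myapp", exist_ok=True)
with open("/etc/myapp/app.conf", "w") as f:
    f.write(user_data)
```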
Somewhat related: you can assign an EC2 instance an IAM role before it boots, which lets you attach a policy granting the instance access to other AWS resources without storing credentials on it. This probably won't solve your problem by itself, but it's closely related.
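For example, with a role attached, the SDK discovers temporary credentials automatically, so fetching the config from S3 needs no keys on disk (bucket and key names are placeholders):

```
import boto3

# boto3 picks up the instance role's temporary credentials from the
# metadata service -- no access keys are configured anywhere.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-config-bucket", Key="app/app.conf")
print(obj["Body"].read().decode("utf-8"))
```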
You could also use a sync tool such as rsync, BitTorrent Sync, or Dropbox to move files between machines. It's probably not a great fit here, but it can work for other use cases.
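If you did go the rsync route, it's a one-line push from whichever machine holds the canonical copy; the user, host, and paths here are made up, and SSH key-based access between the machines is assumed:

```
import subprocess

# Push the config to a peer over SSH (placeholder user/host/paths).
subprocess.run(
    ["rsync", "-az", "/etc/myapp/app.conf",
     "deploy@10.0.1.20:/etc/myapp/app.conf"],
    check=True,
)
```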