
I have a very stupid problem

My beanstalk environments have ec2 key pairs attached to them

I have since set up my EC2 instance roles to allow Session Manager to work, so now I don't need key pairs for my Beanstalk instances at all, which is great.
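(For context, once the instance role is in place you connect through SSM instead of SSH. A minimal sketch, with a placeholder instance ID; the `RUN_AWS` gate is just so the snippet is safe to copy without immediately opening a session:)

```shell
# Placeholder instance ID -- substitute one from your environment.
INSTANCE_ID='i-0123456789abcdef0'

# Set RUN_AWS=1 to actually open the session; no SSH key pair is needed,
# only the SSM agent and an instance role with the SSM permissions.
if [ -n "${RUN_AWS:-}" ]; then
  aws ssm start-session --target "$INSTANCE_ID"
fi
```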

BUT the dumb UI seems to offer no way to remove key pairs from the configuration, only to change them to different key pairs. This is very stupid because I can launch a new environment without specifying a key pair at all.

Tell me I don't have to recreate ALL my Beanstalk environments just to remove their key pairs! Hopefully there's an option right in front of my face that I missed.

EDIT: also, I'm not sure if this should go here or on Stack Overflow, because Beanstalk is a PaaS. It seems like a server-related issue, but the beanstalk tag only has about 450 entries here while on SO it has 6,600. I'll delete and move this question to SO if this isn't the right place.

red888

2 Answers


I found a solution in this comment on GitHub:

You can remove it by running `aws elasticbeanstalk update-environment --environment-name $ENV --options-to-remove Namespace=aws:autoscaling:launchconfiguration,OptionName=EC2KeyName`, replacing `$ENV` with your environment name.
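If you have several environments to clean up, the same call can be looped. A sketch, with placeholder environment names and a `RUN_AWS` gate so it dry-runs by default:

```shell
# The option to strip from each environment's configuration
# (same namespace and option name as the command above).
OPTION='Namespace=aws:autoscaling:launchconfiguration,OptionName=EC2KeyName'

# Placeholder environment names -- substitute your own.
# Set RUN_AWS=1 to actually issue the API calls.
for ENV in my-env-1 my-env-2; do
  if [ -n "${RUN_AWS:-}" ]; then
    aws elasticbeanstalk update-environment \
      --environment-name "$ENV" \
      --options-to-remove "$OPTION"
  fi
done
```

Note that removing the option triggers an environment update, so instances will be replaced as the new launch configuration rolls out.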

imgx64

You’ve got a couple of options:

  • You can remove SSH keys from existing instances by editing `$HOME/.ssh/authorized_keys`

  • You can disable sshd on the instance altogether with `systemctl stop sshd.service`

  • You can **block ssh traffic** in a firewall or through the Security Group.

  • You can change the SSH key to a new one and delete the private key - it's next to impossible to recreate the private key, so no one can abuse it to get into your instance.
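The Security Group route from the list above can be scripted. A sketch, assuming the group ID is a placeholder and the existing rule opens port 22 to the world (adjust the CIDR to whatever your actual rule allows); `RUN_AWS` gates the real API call:

```shell
# Placeholder security-group ID for the environment's instances.
SG_ID='sg-0123456789abcdef0'

# Revoke the inbound SSH rule. Session Manager is unaffected: it uses
# outbound HTTPS to the SSM endpoints, not inbound port 22.
# Set RUN_AWS=1 to actually call the API.
if [ -n "${RUN_AWS:-}" ]; then
  aws ec2 revoke-security-group-ingress \
    --group-id "$SG_ID" \
    --protocol tcp --port 22 --cidr 0.0.0.0/0
fi
```

For Beanstalk specifically, prefer making the equivalent change through the environment's configuration (the `aws:autoscaling:launchconfiguration` namespace) so it isn't reverted on the next environment update.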

Hope that helps :)

MLu
  • well, none of that is relevant because this is Elastic Beanstalk. These are ephemeral instances that are continuously scaled up/down, and I can't edit the ASG settings outside of the Beanstalk configuration. This question is specific to Beanstalk, not EC2 – red888 May 11 '20 at 21:38
  • @red888 At least blocking ssh traffic should be relevant - surely you can change the Security Group? Likewise, changing the key to an invalid one (i.e. deleting the private key) should be a usable option? – MLu May 11 '20 at 21:48
  • would blocking ssh impact Session Manager? Also, how would deleting the private key on the instances fix this? They are recreated continuously. Do you mean deleting the EC2 key pair? If I did that, my Beanstalk environments would fail to add new instances, right? – red888 May 11 '20 at 22:14
  • @red888 Nope blocking SSH won’t break SSM, it doesn’t use SSH port. – MLu May 11 '20 at 22:17
  • @red888 Re ssh keys - when you create a key pair through the console it keeps the *public* key and pushes that to the instances. It also gives you the *private* key (`pem` file) that you save to your laptop and use to ssh to the instance. If you delete this `pem`-file you effectively invalidate the *public* key on the instances as well because they are of no use without the *private* key / pem file. – MLu May 11 '20 at 22:22
  • well, if they were _already_ compromised that doesn't help me, though. I still don't see that as an option because, while very unlikely, it's possible a copy of the key exists somewhere else. Blocking ssh traffic is a good stopgap, but I really just want to remove the keys for consistency/clarity of config as well. I'm assuming this is impossible without cloning/rebuilding my envs? – red888 May 11 '20 at 22:26
  • @red888 What do you mean with *compromised*? The key is only shared between you and AWS. If you don’t trust AWS with your SSH key how can you trust them with running your workloads and storing your data? – MLu May 11 '20 at 23:07