6

I'm trying to use AWS Systems Manager Session Manager to connect to my EC2 instances.

These are private EC2 instances, without public IP, sitting on a private subnet in a VPC with Internet access through a NAT Gateway.

Network ACLs are fully opened (both inbound and outbound), but there's no Security Group that allows SSH access into the instances.

I went through all the Session Manager prerequisites (SSM Agent, Amazon Linux 2 AMI); however, when I try to connect to an instance through the AWS Console I get a red warning sign saying: "We weren't able to connect to your instance. Common reasons for this include".
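One thing I can check is whether the instance shows up as an SSM managed instance at all, e.g. with the AWS CLI (the instance ID below is just a placeholder):

# List the managed instance and its agent ping status (placeholder instance ID)
aws ssm describe-instance-information --filters "Key=InstanceIds,Values=i-0123456789abcdef0" --query "InstanceInformationList[].{Id:InstanceId,Ping:PingStatus,Agent:AgentVersion}"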

Then, if I add a Security Group to the instance that allows SSH access (inbound port 22), wait a few seconds, and repeat the same connection procedure, the red warning doesn't come up and I can connect to the instance.

Even though I know these instances are safe (they don't have a public IP and are located in a private subnet), opening the SSH port on them is not a requirement I would expect from Session Manager. In fact, the official documentation says that one of its benefits is: "No open inbound ports and no need to manage bastion hosts or SSH keys".

I searched for related posts but couldn't find anything specific. Any ideas what I might be missing?

Thanks!

Nicolás García
  • Systems Manager does not use port 22. Based on your description, you are probably selecting **EC2 Instance Connect**, which is a different connection method. – John Rotenstein Jun 28 '20 at 05:23
  • No, I was indeed using Systems Manager Session Manager. Please see my answer below, I could partially solve it. Additional information about the outbound port range that I had to open is appreciated. Thanks! – Nicolás García Jun 28 '20 at 21:11
  • https://aws.amazon.com/premiumsupport/knowledge-center/ec2-systems-manager-vpc-endpoints/ << Follow this for accessing EC2 instances that have no public IP, in VPCs that don't have an IGW, using the AWS backbone to connect to SSM. I was able to get this to work, but it takes about 30-40 seconds for the session to start. Does anyone else have this issue? – tera Jul 23 '21 at 18:59

7 Answers

4

Please make sure you are using the Session Manager Console, not the EC2 Console, to establish the session.

From my own experience, I know that sometimes using the EC2 Console's "Connect" option does not work at first.

However, if you go to the AWS Systems Manager console and then to Session Manager, you will be able to start a session to your instance. This assumes that your SSM agent, role, and internet connectivity are configured correctly. If they are, you should see the SSM managed instances for which you can start a session.

Also, the Security Group should allow outbound connections. Inbound SSH is not needed if you set everything up correctly.
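For what it's worth, once the instance shows up as a managed instance, you can also start a session from the AWS CLI; a rough sketch, assuming the session-manager-plugin is installed and using a placeholder instance ID:

# Start a Session Manager session from the terminal (requires the session-manager-plugin)
aws ssm start-session --target i-0123456789abcdef0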

Marcin
  • This is legit! It fixed my issue. Also, to add: at the time of this comment, it takes 30-40 minutes after attaching the specific IAM role for the instance to even show up in the Node Inventory list within the Systems Manager console. – user791134 Jan 07 '22 at 03:51
  • @user791134 No problem. If the answer was helpful, its upvote would be appreciated. – Marcin Jan 07 '22 at 03:53
  • Yep. The upvote you see is from me! I gave it an upvote around the same time I made the comment! Thank you again, good sir!! – user791134 Jan 17 '22 at 22:03
3

Despite what all the documentation says, you need to enable HTTPS inbound and it'll work.
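If it helps, adding such a rule with the AWS CLI would look roughly like this (the security group ID and CIDR are placeholders; apply it to whichever security group sits in front of the SSM traffic in your setup):

# Allow inbound HTTPS (443); group ID and CIDR are placeholders
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 10.0.0.0/16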

A Kingscote
  • Wow, that's exactly true... I spent a lot of time trying to understand why Session Manager didn't work (sometimes the session was stuck or the shell was not available); HTTPS inbound in the SG solved the problem... but the question is why? Can someone explain it to me? – przemekost Mar 10 '22 at 13:04
2

I had a similar issue, and what helped me was restarting the SSM agent on the server. I logged in with SSH and then ran:

sudo systemctl restart amazon-ssm-agent

The Session Manager Console immediately displayed the EC2 instance as available.
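To confirm the agent came back up, something like:

# Check that the agent is active (running)
sudo systemctl status amazon-ssm-agent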

Alexander Karmes
0

Thanks for your response. I tried connecting using the Session Manager Console instead of the EC2 console and it didn't work. Actually, I get the red warning only the first time I try to connect without the SSH port opened. Then I assign a security group with inbound access to port 22 and can connect. Now, when I remove the security group and try connecting again, I don't get the red warning in the console but a blank screen; nothing happens and I can't get in.

That being said, I found that my EC2 instances didn't have any outbound ports opened in their security groups. I opened the entire TCP port range for outbound, without opening SSH inbound, and could connect. Then I restricted the outbound port range a little bit: I tried opening only the ephemeral range (reserved ports blocked) and the problem came up again.

My conclusion is that the whole TCP port range has to be opened for outbound. This is better than opening SSH port 22 for inbound, but there's something I still don't fully understand. It is reasonable that outbound ports are needed in order to establish the connection and communicate with the instance, but why reserved ports? Does the SSH server side use a reserved port for the backwards connection?
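(As it turned out in the comments below, outbound HTTPS alone is enough; adding that single egress rule with the AWS CLI would look roughly like this, with a placeholder security group ID.)

# Allow outbound HTTPS (443) so the instance can reach the Systems Manager endpoints
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0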

Nicolás García
  • Ah! From [Systems Manager prerequisites - AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-prereqs.html): _"Verify that you are allowing HTTPS (port 443) outbound traffic to the Systems Manager endpoints."_ As a general rule, you should never need to restrict outbound security group settings unless you are doing severe security lockdowns. Similarly, never change NACL settings unless there is a specific reason (e.g. creating a DMZ). – John Rotenstein Jun 28 '20 at 21:41
  • In fact, I just confirmed that only the outbound 443 port is enough for it to work. Regardless of that, I'm going to fully open the outbound range in the security group as you mentioned, but it was good to confirm the specific cause of the issue. Thanks! – Nicolás García Jun 29 '20 at 05:10
0

I was stuck with a similar issue. My Security Groups and NACLs had inbound and outbound rules open only for the precise ports and IPs needed, in addition to the ephemeral port range 1024-65535 for all internal IPs.

What finally worked was opening up port 443 outbound to all internet IPs. Even restricting 443 outbound to internal IP ranges did not work.
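My assumption about why: without VPC interface endpoints, the regional Systems Manager endpoints resolve to public AWS IP addresses, so an outbound rule limited to internal ranges never matches. That is easy to check, e.g. (the region is just an example):

# Resolves to public AWS IPs unless an interface endpoint with private DNS is in place
dig +short ssm.us-east-1.amazonaws.com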

Bhargav
0

The easiest way to do this would be to create the 3 VPC interface endpoints that SSM requires in your VPC and associated subnets (Service Names: com.amazonaws.[REGION].ssm, com.amazonaws.[REGION].ssmmessages and com.amazonaws.[REGION].ec2messages).

Then, you can add an ingress and an egress rule for only port 443 that allows communication within the VPC.

This is more secure than opening up large swathes of the internet to your private instances, and faster, since the traffic stays on AWS's own network and does not have to traverse NATs or gateways.
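A sketch of creating one of those endpoints with the AWS CLI (repeat for ssmmessages and ec2messages; the region and all IDs below are placeholders):

# Interface endpoint for the ssm service; private DNS lets the default endpoint names resolve privately
aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --vpc-endpoint-type Interface --service-name com.amazonaws.us-east-1.ssm --subnet-ids subnet-0123456789abcdef0 --security-group-ids sg-0123456789abcdef0 --private-dns-enabled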


eatsfood
0

Another item that tripped me up: Make sure the security group for your VPC endpoints is open to all inbound connections on 443, and all outbound.

I had mine originally tied to the security group of the EC2 instances I was connecting to (e.g. SG1), and when I created another security group (e.g. SG2), I could not connect. The above was the reason why: originally I had set up my VPC endpoints' security group to reference SG1 instead of allowing all inbound connections on 443.
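To make the difference concrete, this is roughly what the two variants look like in the AWS CLI (all group IDs and the CIDR are placeholders):

# Endpoint SG referencing one specific instance SG - only instances in that SG can reach the endpoints
aws ec2 authorize-security-group-ingress --group-id sg-endpoints --protocol tcp --port 443 --source-group sg-instances-1
# Endpoint SG open to the whole VPC CIDR instead, so any instance in the VPC can reach it
aws ec2 authorize-security-group-ingress --group-id sg-endpoints --protocol tcp --port 443 --cidr 10.0.0.0/16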

j7skov