
I have a bit of an odd issue. I have set up ping monitoring for uptime on some of my servers in AWS that have a VPN tunnel connection back to my local datacenter. From my domain controllers I can ping all of the EC2 instances' private IPs with no issue, but from my monitoring server I can only ping the private IPs of the instances that do not have an Elastic IP attached to them. All systems have the same security groups applied, allowing all internal traffic.
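For reference, a quick way to double-check those rules from the AWS side would be something like this sketch (it assumes boto3 is installed with working credentials, and the security group ID is a hypothetical placeholder, not our real one):

    # Dump the inbound rules of the shared security group so the
    # "all internal traffic allowed" claim can be verified directly.
    import boto3

    SG_ID = "sg-0123456789abcdef0"  # hypothetical placeholder for the shared group

    ec2 = boto3.client("ec2")
    resp = ec2.describe_security_groups(GroupIds=[SG_ID])
    for sg in resp["SecurityGroups"]:
        print(f"{sg['GroupId']} ({sg['GroupName']}) inbound rules:")
        for rule in sg["IpPermissions"]:
            cidrs = [r["CidrIp"] for r in rule.get("IpRanges", [])]
            # IpProtocol is "-1" for "all traffic"; FromPort/ToPort are absent then.
            print(f"  protocol={rule['IpProtocol']} "
                  f"ports={rule.get('FromPort')}-{rule.get('ToPort')} "
                  f"sources={cidrs}")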

The monitoring server tries to ping the private IP of the EC2 instance, but the ping fails, and when I run a tracert, it looks like the traffic is trying to go out over the internet to reach the system. However, if I ping a system without an Elastic IP, it reaches that EC2 server with no issue. In addition, I have no trouble pinging any of these systems from my own workstation; only the monitoring server has this problem.
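To show which way the pings are actually being routed, a sketch like the following could be run on the monitoring server (assuming it is Windows, since tracert is mentioned, and that PowerShell's Find-NetRoute cmdlet is available; the target address is InstanceB from the example below):

    # Ask Windows which route/interface it would choose for the failing instance.
    import subprocess

    TARGET_IP = "10.100.0.3"  # InstanceB from the example below (the failing one)

    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         f"Find-NetRoute -RemoteIPAddress '{TARGET_IP}' | Format-List"],
        capture_output=True, text=True,
    )
    # If the selected next hop is the internet default gateway rather than the
    # VPN tunnel route, the misrouting is happening on this host.
    print(result.stdout)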

Example: from the monitoring server I ping AWS InstanceA, which has the IP 10.100.0.2 and no Elastic IP. Pings to this server succeed with no issue.

If I ping AWS InstanceB, which has the IP 10.100.0.3 and an Elastic IP, the pings fail.
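To make the comparison repeatable, the two traces can be captured side by side with a small sketch like this (again assuming a Windows monitoring server and the two example addresses above):

    # Run a short tracert against both example instances and print the first
    # hops so the two paths can be compared directly.
    import subprocess

    TARGETS = {
        "InstanceA (no Elastic IP)": "10.100.0.2",
        "InstanceB (Elastic IP)": "10.100.0.3",
    }

    for label, ip in TARGETS.items():
        print(f"--- {label}: {ip} ---")
        # -d skips reverse DNS lookups; -h 4 limits the trace to four hops,
        # enough to see whether traffic enters the tunnel or heads out to
        # the internet.
        trace = subprocess.run(
            ["tracert", "-d", "-h", "4", ip],
            capture_output=True, text=True,
        )
        print(trace.stdout)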

  • What is the configuration of the inbound security group on the instance that does not respond to pings? Also, can you please clarify from WHERE you are initiating the pings -- are they from an EC2 instance within the same VPC? – John Rotenstein Dec 13 '21 at 23:56
  • Sorry for the long wait to reply. The inbound security group is the same one applied to 5 other AWS servers, all of which have no problem responding to pings or being connected to by the monitoring server; they just don't have public IPs. The monitoring server is on-prem, on our server subnet. Other servers on that subnet are not experiencing this routing issue and have no trouble pinging and RDP'ing to any of the AWS servers, regardless of whether or not they have a public IP. – Bob Bobberson Dec 15 '21 at 22:31

0 Answers