
I have a Google Cloud system running on subnet 10.128.1.0/24 and a remote network on 10.173.2.2/23, with a Google Cloud VPN IPsec tunnel up and running.

I have the Google-side remote network set to 10.173.2.2/23 and the local IP range set to 0.0.0.0/0, with the reciprocal configuration on the remote site. The intention is to force all network traffic from the remote server through the VPN.
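For reference, the equivalent classic Cloud VPN configuration with gcloud looks roughly like the sketch below (names, region, and peer address are placeholders, and I'm assuming the /23 is 10.173.2.0/23):

    # Classic Cloud VPN tunnel whose local traffic selector is 0.0.0.0/0,
    # so the remote side can send all of its traffic into the tunnel.
    gcloud compute vpn-tunnels create my-tunnel \
        --region=us-central1 \
        --target-vpn-gateway=my-vpn-gateway \
        --peer-address=203.0.113.10 \
        --shared-secret=MY_SHARED_SECRET \
        --ike-version=2 \
        --local-traffic-selector=0.0.0.0/0 \
        --remote-traffic-selector=10.173.2.0/23

    # A route is still needed to steer VPC traffic for the remote range
    # into the tunnel.
    gcloud compute routes create route-to-remote \
        --network=default \
        --destination-range=10.173.2.0/23 \
        --next-hop-vpn-tunnel=my-tunnel \
        --next-hop-vpn-tunnel-region=us-central1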

I am able to ping (etc.) the local LAN side of my Google server (10.128.1.2), but I can't reach its public IP (or any public IP).

Is there an easy-ish way to set up a Google VPN tunnel that will route all traffic to the public IPs on my servers, or to the web in general?

Cheers

  • Same issue here... any hint? – baraka Sep 28 '17 at 13:30
  • Can you provide more information on your use case? My understanding is that you are trying to get ICMP packets (originating on your premises) into the tunnel, reach the VM, and have the reply come from the external interface back into the tunnel. External IPs on GCE VMs are an abstraction performed by the Google network (if you list the interfaces on your VMs using “sudo ifconfig -a” you will only see an internal IP interface and a loopback). – Carlos Dec 07 '17 at 20:50
  • That being said, even if the VM had an external IP directly attached to it, some routing might have to be defined in the VM itself so that packets from the external IP could be sent via the internal one. You might want to use “traceroute” at different checkpoints to see how the routing is being done. – Carlos Dec 07 '17 at 20:50
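Following the traceroute suggestion above, a minimal set of checks could look like this (the external IP is a placeholder):

    # From the on-premises server: the internal address should route via
    # the tunnel; a public address shows where traffic actually leaves.
    traceroute 10.128.1.2          # expected to traverse the VPN
    traceroute 198.51.100.7        # placeholder for the VM's external IP

    # On the GCE VM itself: only the internal IP and loopback appear,
    # since the external IP is never attached to an interface.
    sudo ifconfig -a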

1 Answer


By using Google Cloud VPN you've connected your on-premises network to your Virtual Private Cloud (VPC) network through an IPsec VPN connection.

Let's have a look at the IP addresses that a VM instance can have:

Each VM instance can have one primary internal IP address, one or more secondary IP addresses, and one external IP address. To communicate between instances on the same Virtual Private Cloud (VPC) network, you can use the internal IP address for the instance. To communicate with the internet, you must use the instance's external IP address unless you have configured a proxy of some kind. Similarly, you must use the instance's external IP address to connect to instances outside of the same VPC network unless the networks are connected in some way, like via Cloud VPN. Both external and internal primary IP addresses can be either ephemeral or static.
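For illustration, both addresses can be read with a command along these lines (instance name and zone are placeholders):

    # Print the primary internal IP and the external (NAT) IP of a VM.
    gcloud compute instances describe my-instance \
        --zone=us-central1-a \
        --format="get(networkInterfaces[0].networkIP, networkInterfaces[0].accessConfigs[0].natIP)"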

As a result, the VM instance uses its internal IP to communicate with your server located in the on-premises network.

You can find more details about external IPs in this section:

You can assign an external IP address to an instance or a forwarding rule if you need to communicate with the Internet, with resources in another network, or need to communicate with a resource outside of Compute Engine.

As @Carlos mentioned, external IP addresses on GCE VMs are an abstraction, and you'll only see the internal IP address on the interfaces if you connect to your VM instance. In addition, if the VM instance doesn't have an external IP, it can use Cloud NAT to reach the internet.
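A minimal Cloud NAT setup for such instances could look like the following sketch (router and NAT names are placeholders; note that Cloud NAT serves VMs in the VPC, not traffic arriving over the VPN):

    # Cloud Router plus a Cloud NAT config covering all subnets.
    gcloud compute routers create my-router \
        --network=default \
        --region=us-central1

    gcloud compute routers nats create my-nat \
        --router=my-router \
        --region=us-central1 \
        --auto-allocate-nat-external-ips \
        --nat-all-subnet-ip-ranges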

So, it's intended behavior that you were able to ping only the internal IP address through the Cloud VPN connection from your on-premises network, and could not reach the external IP addresses of VM instances or resources on the internet.

You can reach resources on the internet through the VPN connection by using third-party solutions available in the Google Cloud Marketplace.
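One common pattern behind those solutions is a NAT gateway VM with IP forwarding enabled and a default route pointing at it. A rough sketch follows, with all names as placeholders and the usual caveat that the gateway VM still needs forwarding and masquerading configured on the OS side:

    # NAT gateway VM (a sketch, not a hardened setup). With IP forwarding
    # enabled, traffic arriving over the VPN can leave to the internet
    # through this VM's external IP.
    gcloud compute instances create nat-gateway \
        --zone=us-central1-a \
        --can-ip-forward

    # Default route sending internet-bound VPC traffic to the gateway VM.
    gcloud compute routes create nat-default-route \
        --network=default \
        --destination-range=0.0.0.0/0 \
        --next-hop-instance=nat-gateway \
        --next-hop-instance-zone=us-central1-a \
        --priority=800

    # On the gateway VM itself:
    #   sudo sysctl -w net.ipv4.ip_forward=1
    #   sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE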

Serhii Rohoza