
I have a GKE cluster deployed with version 1.20.10-gke.1600. I created an internal ingress with GCE and an internal IP was assigned to it. However, I am not able to ping this internal ingress IP from a VM in the same region and network. Ping to the external ingress works fine. I read the document below, which says pinging an internal TCP/UDP load balancer is not possible because it is not deployed as a network device. However, I do not see anything regarding the internal HTTPS load balancer.

ping 10.128.0.174
Pinging 10.128.0.174 with 32 bytes of data:
Request timed out.
Ping statistics for 10.128.0.174:
    Packets: Sent = 1, Received = 0, Lost = 1 (100% loss)

My question is: why am I not able to ping my internal LB ingress IP? I am pinging from a VM in the same region and network. curl to the internal ingress IP works, but ping does not.

saurabh umathe
  • What's the actual question? What are you trying to achieve? Please specify what you want to know :) – Wojtek_B Nov 18 '21 at 08:12
  • The question is why I am not able to ping my internal LB ingress IP. I am trying to ping from a VM in the same region and network. curl to the internal ingress IP works but ping does not... – saurabh umathe Nov 18 '21 at 09:22
  • I'm pretty sure you can't ping the IP of the internal LB. I believe this will also be true for the new external LB currently in preview. – Gari Singh Nov 18 '21 at 10:06
  • Thanks @GariSingh, does it hold true for both the internal HTTP LB and the internal TCP/UDP load balancer? And is there any specific reason why we can't ping the LB IP; am I missing something basic? – saurabh umathe Nov 18 '21 at 10:14
  • 1
    The internal LBs are based on Envoy proxy and the IP is a virtual IP so there is actually nothing to "ping". – Gari Singh Nov 18 '21 at 10:17
  • Thanks, it makes sense. – saurabh umathe Nov 18 '21 at 16:29
  • @GariSingh To confirm, when you say internal LB is it internal tcp/udp LB or internal http/https Load balancer or both? – saurabh umathe Nov 18 '21 at 16:36

1 Answer


The load balancer's IP is just (as Gari Singh wrote) a virtual IP with no appliance behind it, so it won't respond to ping. This is intended behavior.
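You can see the difference yourself from a VM in the same network. A minimal sketch (10.128.0.174 is the IP from the question; the flags are for Linux ping/curl, so adjust on other platforms):

```shell
ILB_IP=10.128.0.174   # internal ingress IP from the question

# curl succeeds: the HTTP request is answered by the Envoy proxies
# behind the virtual IP.
curl -s -o /dev/null -w "HTTP %{http_code}\n" "http://${ILB_IP}/"

# ping times out: no network device owns the VIP, so nothing replies
# to ICMP echo requests.
ping -c 2 -W 2 "${ILB_IP}" || echo "no ICMP reply (expected)"
```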

The documentation you linked about pinging the LB's internal address clearly says:

This test demonstrates an expected behavior: You cannot ping the IP address of the load balancer. This is because internal TCP/UDP load balancers are implemented in virtual network programming — they are not separate devices.

and then explains why:

Internal TCP/UDP Load Balancing is implemented using virtual network programming and VM configuration in the guest OS. On Linux VMs, the Linux Guest Environment performs the local configuration by installing a route in the guest OS routing table. Because of this local route, traffic to the IP address of the load balancer stays on the load balanced VM itself. (This local route is different from the routes in the VPC network.)
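On a backend Linux VM you can inspect that guest-OS route directly. A sketch (10.20.1.1 is a placeholder forwarding-rule IP, not a value from your project, and the exact output depends on the guest-agent version):

```shell
# Show routes in the kernel's "local" table, where the Google guest
# agent installs a host-scoped entry for the load balancer's IP.
ip route show table local

# Filter for the (placeholder) forwarding-rule IP to find the entry
# that keeps traffic to the LB address on the VM itself.
ip route show table local | grep "10.20.1.1"
```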

So, if for example you're trying to set up some sort of custom health check, take into account that "pinging" the LB's internal address from inside the cluster is also unreliable:

Don't rely on making requests to an internal TCP/UDP load balancer from a VM being load balanced (in the backend service for that load balancer). A request is always sent to the VM that makes the request, and health check information is ignored. Further, the backend can respond to traffic sent using protocols and destination ports other than those configured on the load balancer's internal forwarding rule.

Even more:

This default behavior doesn't apply when the backend VM that sends the request has an --next-hop-ilb route with a next hop destination that is its own load balanced IP address. When the VM targets the IP address specified in the route, the request can be answered by another load balanced VM.

You can, for example, create a destination route of 192.168.1.0/24 with a --next-hop-ilb of 10.20.1.1.

A VM that is behind the load balancer can then target 192.168.1.1. Because the address isn't in the local routing table, it is sent out of the VM, so Google Cloud routes become applicable. Assuming no other routes with higher priority apply, the --next-hop-ilb route is chosen.
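The route described above can be sketched with gcloud (the route name, network name, destination range, and the 10.20.1.1 address are all placeholders taken from the example, not values from a real project):

```shell
# Create a static route whose next hop is the internal load balancer's
# forwarding-rule IP. Backend VMs that send traffic to 192.168.1.0/24
# then leave the VM, so the request can be answered by another
# load-balanced VM instead of looping back to the sender.
gcloud compute routes create ilb-hairpin-route \
    --network=my-vpc \
    --destination-range=192.168.1.0/24 \
    --next-hop-ilb=10.20.1.1
```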

Finally, check out the table of supported protocols: ICMP works only with the external TCP/UDP load balancer.

Wojtek_B
  • Thank you for these details @Wojtek_B. This is very helpful. This makes sense to me now. So, the document I linked clearly mentioned internal tcp/udp load balancer and not the internal http/https load balancer which confused me on the working of Internal Load balancer in GCP. – saurabh umathe Nov 18 '21 at 16:27
  • My question was regarding internal HTTP/HTTPS load balancer and not internal tcp/udp load balancer, I assume whatever information you have shared here hold true for internal http/https load balancer as well(?)... Thanks... – saurabh umathe Nov 18 '21 at 16:33
  • Correct - the same applies for the HTTP(S) load balancer as well. Have a look at the [table of supported protocols](https://cloud.google.com/load-balancing/docs/features#protocol-from-clients) - ICMP works only for the external TCP/UDP load balancer. – Wojtek_B Nov 19 '21 at 09:50
  • Thanks @Wojtek_B. We have many External HTTP(S) classic load balancers and I can still ping to their LB IP. I have even created one to test and ping is working for the same. But, as per this table ICMP works only for external TCP/UDP. – saurabh umathe Nov 19 '21 at 11:11