
I have an AKS cluster with an nginx ingress controller. The controller has created a service of type LoadBalancer, and its Ports section looks like this (from kubectl get service):

80:31141/TCP

If I understand things correctly, port 80 is the ClusterIP port, which is not reachable from the outside, while 31141 is a NodePort, which is reachable from outside. So I would assume that the Azure Load Balancer sends traffic to this 31141 port.
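
For context, all three port numbers involved (the service port, the nodePort, and the pod's targetPort) can be read off the Service object itself. A minimal check, assuming the controller's service is named ingress-nginx-controller in the ingress-nginx namespace (adjust to your setup):

```bash
# Print port (cluster/LB port), nodePort and targetPort for each service port
kubectl get service ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{range .spec.ports[*]}{.name}: port={.port} nodePort={.nodePort} targetPort={.targetPort}{"\n"}{end}'

# Example output (values will differ):
# http: port=80 nodePort=31141 targetPort=http
```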

I was surprised to find that the Azure Load Balancer is set up with a rule:

frontendPort: 80
backendPort: 80
probe (healthCheck): 31141

So it actually does use the NodePort, but only as a health check; all traffic is sent to port 80, which presumably behaves the same way as 31141.
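
For anyone who wants to reproduce this, the rule and probe can be inspected from the CLI as well. A sketch, assuming default AKS naming (the managed LB is called kubernetes and lives in the MC_* node resource group; the names below are placeholders):

```bash
# Find the AKS-managed ("node") resource group that holds the load balancer
az aks show -g myResourceGroup -n myCluster --query nodeResourceGroup -o tsv

# List the load-balancing rules and health probes of the managed LB
az network lb rule list -g MC_myResourceGroup_myCluster_westeurope \
  --lb-name kubernetes -o table
az network lb probe list -g MC_myResourceGroup_myCluster_westeurope \
  --lb-name kubernetes -o table
```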

A curious note: if I try to reach the node IP on port 80 from a pod, I only get "connection refused", but I suppose it does work when the traffic comes from the load balancer.

I was not able to find any information about this on the internet, so the question is: how does this really work, and why does the Azure LB do it this way?

P.S. I don't have any trouble with connectivity; it works. I am just trying to understand how and why it works behind the scenes.

Ilya Chernomordik

3 Answers


I think I have figured out how this works (disclaimer: my understanding might not be correct; please correct me if it's wrong).

What happens is that load-balanced traffic does not reach the node itself on port 80, nor does it arrive on the open node port (31141 in my case). Instead, traffic that is sent to the node is not "handled" by the node itself but is routed further with the help of iptables. So if traffic hits the node with a destination IP equal to the LB frontend IP and port 80, it goes to the service and onwards to the pod.
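
A way to check this on the node itself (a sketch; the frontend IP placeholder and chain hash are illustrative, and the chain names come from kube-proxy's iptables mode):

```bash
# On the node (reachable e.g. via an SSH pod as the Azure docs describe):
sudo iptables -t nat -L KUBE-SERVICES -n | grep <lb-frontend-ip>

# Example matching rule (chain hash is made up):
# KUBE-FW-ABCD1234  tcp -- 0.0.0.0/0  <lb-frontend-ip>  /* default/ingress-nginx:http loadbalancer IP */ tcp dpt:80
```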

As for the health check, I suppose it cannot use the same port 80 because the probe's request would not have a destination equal to the external IP (the LB frontend IP) but rather the node itself directly, so it uses the service's nodePort for that reason.
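
The nodePort rule, in contrast, matches on the destination port alone, which is why a probe addressed to the node's own IP can use it (again a sketch; the chain hash is illustrative):

```bash
sudo iptables -t nat -L KUBE-NODEPORTS -n | grep 31141

# Example matching rule (chain hash is made up):
# KUBE-SVC-ABCD1234  tcp -- 0.0.0.0/0  0.0.0.0/0  /* default/ingress-nginx:http */ tcp dpt:31141
```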

Ilya Chernomordik
  • I was thinking about exactly the same question recently! Did you find any more information that can prove your current theory (how to get this iptables configuration from the portal, CLI, or somewhere else)? And I guess the confusing LB port configuration is shown because the backend ports are not actually set, and somewhere there is logic to return frontendPort in that case, something like: backendPort != "" ? backendPort : frontendPort. And, finally, how does this magic happen? Since pods live inside nodes, somehow the requests must be "handled" by nodes, right? – curious coder Sep 01 '20 at 23:17
  • I did not manage to get any more information, but I think to prove my theory you can SSH to the node itself (you can do that via a pod; the Azure docs have a good explanation), then analyze the internals of Kubernetes by checking iptables or similar and see what it is really doing – Ilya Chernomordik Sep 02 '20 at 08:36
  • The load balancer sends traffic to the node on the node's port, which is then translated by iptables. I had a similar confusion and this reference helped: https://github.com/kubernetes/kubernetes/issues/58759 – shariqmaws Dec 05 '21 at 09:42

It seems you have some misunderstandings about the ingress ports. Let me show you some details about ingress in AKS.

Ingress info:

[screenshot: kubectl output for the ingress service showing external IP 40.121.64.51 and ports 80:31282/TCP, 443:31869/TCP]

From the screenshot, ports 80 and 443 are the ports of the Azure LB, which you can access from the Internet via the public IP associated with the LB (here 40.121.64.51). Ports 31282 and 31869 are ports on the AKS nodes, which you cannot access from the Internet; you can only reach them from within the vnet through the nodes' private IPs.
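
You can verify the difference in reachability with a quick test (the node private IP below is a placeholder; the public IP is the one from my screenshot):

```bash
# From a VM or pod inside the vnet: the nodePort answers
curl -I http://10.240.0.4:31282      # node private IP : nodePort

# From the Internet, only the LB frontend answers
curl -I http://40.121.64.51          # port 80 on the LB public IP
```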

Azure LB info:

Health probes:

[screenshot: Azure LB health probes targeting ports 31282 and 31869]

LB rules:

[screenshot: Azure LB load-balancing rules for frontend ports 80 and 443]

From the screenshots, you can see the health probes and the rules of the Azure LB. It uses them to redirect traffic from the Internet to the AKS nodes' ports; the nodes are the backend pool of the Azure LB.
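
You can also list the backend pool from the CLI to confirm that its members are the nodes' NICs, not pods or services (resource names below are placeholders for the defaults AKS creates):

```bash
# Backend pool members resolve to the nodes' NIC IP configurations
az network lb address-pool list -g MC_myResourceGroup_myCluster_westeurope \
  --lb-name kubernetes --query '[].backendIPConfigurations[].id' -o tsv
```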

I hope this helps you understand the ingress traffic flow in AKS.

Update:

The LB rules info:

[screenshot: LB rule detail showing frontend port 80, backend port 80, and the cluster node as the backend pool]

Charles Xu
  • Thanks for the answer, I still don't fully understand. 10.0.151.179 is the private IP of the service, not the node, if I understand it right (at least I have a different one for the node). The load balancer is set up to send traffic on ports 80/443 to the node, though the nodePorts of the ingress controller are 31282/31869. When I try accessing node-ip:80 (from a pod) I get connection refused, though node-ip:31282 works as expected. So the real question is why the load balancer does not map 80 to 31282, and how it works if the node refuses the connection. 80:31282 means cluster port:node port, I think. – Ilya Chernomordik Jul 24 '19 at 07:33
  • @IlyaChernomordik Ports 80 and 443 are not ports on the nodes; they are ports of the LB. So you cannot access them through node-ip:80. For example, the traffic will come from the Internet to port 80 of the LB, then go to the backend of the LB, meaning the AKS nodes, then to port 31282 of the node, and finally to the IP of the service. – Charles Xu Jul 24 '19 at 07:42
  • @IlyaChernomordik The IP 10.0.151.179 is the service IP. It was my mistake and I have updated the answer. – Charles Xu Jul 24 '19 at 07:43
  • Maybe I don't understand how the rules work? From what I see, the rules are set up to send ports 80 and 443 to a backend pool that contains my cluster nodes, on the same ports 80 and 443. And the screenshot you provided looks the same. – Ilya Chernomordik Jul 24 '19 at 07:44
  • And the node ports are "merely" health checks – Ilya Chernomordik Jul 24 '19 at 07:45
  • @IlyaChernomordik The rules also select the node backends of the LB. I will give a screenshot of the rule info; then you will understand. – Charles Xu Jul 24 '19 at 07:48
  • The screenshot is essentially the same as what I see as well. But it says: frontend: 80, backend: 80, backend pool: 1 virtual machine (which is a cluster node, right?). So I understand it like this: if traffic comes to port 80, send it on to port 80 on one of the backend nodes (which are cluster nodes in this case). Isn't that correct? – Ilya Chernomordik Jul 24 '19 at 08:10
  • @IlyaChernomordik My AKS cluster has just one node. It's a misunderstanding: there are two port-80s, one belongs to the LB and one belongs to the pod, and the probe check port belongs to the node. You can take a look at the [LB for AKS](https://learn.microsoft.com/en-us/azure/aks/concepts-network#services). – Charles Xu Jul 24 '19 at 08:13
  • Well, it does seem to work as you describe, but it does not look like that in the GUI. If you open your backend pool, you will see your virtual machines with their IPs, not pod/service IPs. So I guess the traffic really does go from the load balancer to the node on port 31282 in reality then? – Ilya Chernomordik Jul 24 '19 at 08:28
  • @IlyaChernomordik Yes, the traffic goes from the port of the LB to the node port and then through the service to the pod. – Charles Xu Jul 24 '19 at 08:40
  • Well, I do appreciate the help a lot, though I still don't understand the discrepancy between how you describe it working (LB -> node (on node port) -> service) and the load balancer settings, which show that the backend pool is the nodes on port 80. Is there something not shown in the GUI that makes it route traffic to a node port, perhaps? – Ilya Chernomordik Jul 24 '19 at 09:09
  • @IlyaChernomordik No, there is nothing more shown in the GUI. If you still do not understand the traffic, you can take a look at the features of the Azure LB and the traffic from the LB to the VM. – Charles Xu Jul 24 '19 at 09:30
  • I have now run a command to see the open ports on the node, and it indeed does not have an open port 80. So I think the GUI is quite misleading, as there is no indication there that port 80 will go to 31282 (in your example). Do you know if it's possible to see this in the CLI? – Ilya Chernomordik Jul 24 '19 at 10:49
  • @IlyaChernomordik You can look at the NSG rules of the AKS node resource group; they also show the ports for the nodes, the same as the LB rules. – Charles Xu Jul 25 '19 at 01:04
  • I have investigated some more and I think I have found what is really going on. I am not 100% sure of it, but you can check my answer if you want :) – Ilya Chernomordik Jul 25 '19 at 08:17

@IlyaChernomordik - I too am trying to understand this. The Azure LB rule indicates traffic will be sent to port 80 instead of the nodePort! Ilya, you mentioned iptables and referenced a GitHub issue that affirms as much but doesn't give much explanation.

I found this AKS Networking Deep Dive blog, which goes in-depth on the use of iptables and SNAT on the node, such that requests to port 80 get routed to the NodePort, as shown in this image extracted from the blog: [diagram: traffic from the Azure LB arriving at the node and being redirected by iptables to the NodePort]

So it seems that the Azure LB forwards requests to the port or targetPort (80 or 443) and relies on iptables on the node to intercept them and forward them to the NodePort, where Kubernetes routing then takes over.
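
Putting it together, a rough sketch of the chain traversal kube-proxy programs in its iptables mode (the chain hashes below are made up; the real ones can be read from the KUBE-SERVICES output on a node):

```bash
# Packet arrives at the node with dst = <lb-frontend-ip>:80, then:
#   KUBE-SERVICES     match dst <lb-frontend-ip>, dpt 80  -> KUBE-FW-ABCD1234
#   KUBE-FW-ABCD1234  mark for masquerade (SNAT)          -> KUBE-SVC-ABCD1234
#   KUBE-SVC-ABCD1234 pick one endpoint at random         -> KUBE-SEP-EFGH5678
#   KUBE-SEP-EFGH5678 DNAT to podIP:targetPort, e.g. 10.244.1.5:80

# Inspect the final DNAT rule (substitute a real chain hash):
sudo iptables -t nat -L KUBE-SEP-EFGH5678 -n
```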

Beans