
I'm building a microk8s cluster starting with 3 system nodes, meant to run several applications in containers. Application A in Container A ingests UDP packets through Service A, which is exposed as a NodePort on all of the cluster nodes, transforms them, and posts them to Application B, which runs as a WebApi behind a Service of type LoadBalancer. The 3-node cluster runs on Ubuntu Linux VMs on a Hyper-V hypervisor hosted on Windows Server. I can send packets from the VM host into the VMs on one machine, and the packets get picked up by Application A and forwarded to Application B without problem.
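
For context, this is roughly the Service definition behind Application A's listener, reconstructed from the describe output further down (the Helm templating is omitted, so the exact manifest in the chart may differ slightly):

kubectl apply -n mynamespace -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: myapp-listener
spec:
  type: NodePort
  selector:
    app: myapp
  externalIPs:
    - 10.23.176.33
    - 10.23.176.34
    - 10.23.176.35
  ports:
    - name: listener
      protocol: UDP
      port: 31411       # Service / external IP port
      targetPort: 4411  # port the application listens on inside the container
      nodePort: 31535   # port exposed on every cluster node
EOF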

On another (as far as I'm aware, identically configured) server that hosts a completely separate cluster on a different network, the packets don't get picked up by Application A. Tracing the packets through the VM host, they make it into the VMs, through Service A, and into Application A's container, as determined by iftop. However, in this second case, when inspecting the packets with iftop inside Application A's container, I see them arriving on Service A's external port rather than on the targetPort defined on the Service. Can somebody please explain this behavior and identify what may be impacting the port translation for the Service?
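
For reference, this is roughly how the kube-proxy side can be inspected on the affected node. It assumes kube-proxy is running in its default iptables mode and that the microk8s snap keeps its arguments under /var/snap/microk8s/current/args/; the chain names may differ under ipvs mode:

# confirm which proxy mode kube-proxy is using
cat /var/snap/microk8s/current/args/kube-proxy

# in iptables mode, the Service and NodePort DNAT rules live in these chains
sudo iptables -t nat -L KUBE-SERVICES -n | grep 31411
sudo iptables -t nat -L KUBE-NODEPORTS -n | grep 31535

# look for UDP conntrack entries and whether they were rewritten to 4411
sudo conntrack -L -p udp 2>/dev/null | grep -e 31411 -e 4411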

I've tried:

  • Alternating between LoadBalancer and NodePort service types (couldn't get LoadBalancer working at all for UDP).
  • Using different ports in different ranges for the NodePort to no effect.
  • Tearing down and rebuilding both clusters from the ground up using identical configurations for Pods and Services.
  • Running a bare instance of Application A directly on the Kubernetes node and proving that it can pick up packets just fine (see the quick listener check sketched after this list).
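
The bare-instance test can be approximated with a plain UDP listener, which takes the application out of the equation entirely. A minimal sketch (netcat flag syntax differs between the BSD and traditional variants):

# on the node itself, outside Kubernetes: listen directly on the application's port
nc -u -l 4411            # BSD netcat; traditional netcat wants "nc -u -l -p 4411"

# from the VM host (or any other machine), send a test datagram
echo "test" | nc -u <node-ip> 4411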

I've verified that the issue is not my applications' internal processing; they've been proven to pick up and process the packets without issue in every case tested to date. The output of kubectl describe for the Service is below:

Name:                     myapp-listener
Namespace:                mynamespace
Labels:                   app=myapp
                          app.kubernetes.io/instance=myapp
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=myapp
                          app.kubernetes.io/version=0.3.2074-prerelease
                          helm.sh/chart=myapp-0.1.2076
Annotations:              meta.helm.sh/release-name: myapp
                          meta.helm.sh/release-namespace: mynamespace
Selector:                 app=myapp
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.152.183.229
IPs:                      10.152.183.229
External IPs:             10.23.176.33,10.23.176.34,10.23.176.35
Port:                     listener  31411/UDP
TargetPort:               4411/UDP
NodePort:                 listener  31535/UDP
Endpoints:                10.23.176.35:4411
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

As stated earlier, this behavior only occurs on one server, not on any of the other identically configured servers; the only discernible difference between them is the networks on which they operate. On the faulty cluster, within Application A's container, I can see the packets coming in on port 31411 while my application is listening on port 4411. On the correctly functioning server, the application receives packets on port 4411.
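
To rule out any ambiguity in how iftop attributes traffic, the destination port of each datagram can also be checked directly inside the container; a minimal sketch, assuming tcpdump (or a debug image that provides it) is available in the pod, with the pod name as a placeholder:

# shell into Application A's pod
kubectl -n mynamespace exec -it <myapp-pod> -- sh

# inside the container: print the exact destination port of each incoming datagram
tcpdump -ni any 'udp and (port 31411 or port 4411)'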
