6

I have an Azure Kubernetes Service cluster and a VM outside the cluster, in a different virtual network, from which I try to connect to my containerized Pod app running on TCP port 9000. I must not use a public IP, and this is not an HTTP connection; I need a plain TCP connection. For that I followed the instructions from this link: https://learn.microsoft.com/en-us/azure/aks/ingress-internal-ip

I defined a YAML file (internal-ingress.yaml) for the helm install:

controller:
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"

I installed the NGINX ingress controller with Helm:

helm install nginx-ingress ingress-nginx/ingress-nginx \
    -f internal-ingress.yaml \
    --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
    --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
    --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
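
For what it's worth, the TCP mapping used in the upgrade step below could also be passed already at install time, so the controller install and the port exposure happen in one step (a sketch using the same chart and the same value as in the upgrade further down):

helm install nginx-ingress ingress-nginx/ingress-nginx \
    -f internal-ingress.yaml \
    --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
    --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
    --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux \
    --set tcp.9000="default/frontarena-ads-aks-test:9000"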

After that, the NGINX controller Service exposes ports 80 and 443:

kubectl get services -o wide

NAME                                      TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
nginx-ingress-ingress-nginx-controller    LoadBalancer   10.0.36.81   10.33.27.35   80:31312/TCP,443:30653/TCP

After that I ran a helm upgrade to make sure my TCP port 9000 gets configured:

helm upgrade nginx-ingress ingress-nginx/ingress-nginx -f internal-ingress.yaml --set tcp.9000="default/frontarena-ads-aks-test:9000"

This created the ConfigMap entry automatically, which I can see when I check with "kubectl get configmaps":

apiVersion: v1
data:
  "9000": default/frontarena-ads-aks-test:9000
kind: ConfigMap
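
The full object can be inspected directly; assuming the chart's default naming (the release name plus "-ingress-nginx-tcp", which matches the controller Service name above, but treat the exact name as an assumption):

kubectl get configmap nginx-ingress-ingress-nginx-tcp -o yaml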

I have also edited my NGINX controller Service to add port 9000:

spec:
  clusterIP: 10.0.36.81
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 31312
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 30653
    port: 443
    protocol: TCP
    targetPort: https
  - name: 9000-tcp
    nodePort: 30758
    port: 9000
    protocol: TCP
    targetPort: 9000
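
After this edit the controller's LoadBalancer Service should also list port 9000; a quick check, with output that should look roughly like this (values taken from the specs above):

kubectl get service nginx-ingress-ingress-nginx-controller

NAME                                      TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
nginx-ingress-ingress-nginx-controller    LoadBalancer   10.0.36.81   10.33.27.35   80:31312/TCP,443:30653/TCP,9000:30758/TCP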

Here is my deployed app Pod (its Deployment and ClusterIP Service):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontarena-ads-deployment
  labels:
    app: frontarena-ads-deployment
spec:
  replicas: 1
  template:
    metadata:
      name: frontarena-ads-aks-test
      labels:
        app: frontarena-ads-aks-test
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      restartPolicy: Always
      containers:
      - name: frontarena-ads-aks-test
        image: fa.dev/:test1
        ports:
          - containerPort: 9000
  selector:
    matchLabels:
      app: frontarena-ads-aks-test
---
apiVersion: v1
kind: Service
metadata:
  name: frontarena-ads-aks-test
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 9000
  selector:
    app: frontarena-ads-aks-test
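
Before looking at the Ingress layer at all, a quick in-cluster check can confirm that this Service actually reaches the Pod on port 9000. A throwaway busybox Pod with telnet is enough (busybox and the default-namespace service FQDN are just assumptions for this sketch):

kubectl run tcp-check --rm -it --image=busybox --restart=Never -- \
    telnet frontarena-ads-aks-test.default.svc.cluster.local 9000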

I also configured and deployed an Ingress resource YAML in the same default namespace to connect my Ingress with the above Service (I suppose it can reach it based on the ClusterIP):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ads-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: frontarena-ads-aks-test
              servicePort: 9000

Now the issue is the following:

If I try to target the Ingress Controller IP (or its DNS name configured by the Azure admins) on port 9000 from my VM, which is deployed outside the AKS cluster in a different virtual network, I do not get any response. That leads me to conclude that the Ingress Controller is not forwarding the connection to my Service, which targets the app running on port 9000 in my Pod.

I can't find the reason why the Ingress Controller will not forward the traffic to my Service, which targets port 9000, the port my app Pod is running on.
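
For completeness, the connection attempt from the VM is nothing more than a raw TCP probe against the internal LoadBalancer IP shown above, or against the proxy DNS name in front of it (placeholder below):

telnet 10.33.27.35 9000
# or via the DNS name configured by the Azure admins (placeholder)
telnet <proxy-dns-name> 9000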

Thank you!!!

vel
  • Is the VM in a different VNet from the AKS? And which network type do you use for the AKS? – Charles Xu Feb 15 '21 at 06:18
  • @CharlesXu yes it is. We are using CNI networking on Linux. The Service has a ClusterIP and NGINX uses an internal load balancer. Only a TCP connection is required – vel Feb 15 '21 at 06:27
  • Do you know the internal ingress just can be accessible in the same VNet? So you need to use the VNet peering. – Charles Xu Feb 15 '21 at 06:31
  • @CharlesXu thanks for the response. I am not an expert in Azure; what does this mean exactly, what should be configured? In all the examples on the internet and Stack Overflow for Ingress I didn't find anything similar. The VMs and AKS are in a different VNet and region, and because of that we have a proxy implemented by our Azure admins with a DNS name and IP which connects to this 10.33.x.x EXTERNAL-IP address of the Ingress Controller, because 10.33 is a non-routable network and we need to have a reverse proxy on a server to connect to it. Did you mean that by VNet peering? TELNET from the VMs to that proxy IP works! Thanks – vel Feb 15 '21 at 08:59
  • Hello, I don't think you'll need `Ingress` resource as you've already added the passthrough in the `Service` and the `Configmap`. – Dawid Kruk Feb 15 '21 at 17:25
  • Does the peering solve your problem? – Charles Xu Feb 16 '21 at 00:52
  • @AndreyDonald could you please provide the output when you are trying to access your service? And try to set the `host` field in the `Ingress` spec, it seems to be missing (even if it's supported by the K8s Spec). – Emanuel Bennici Feb 18 '21 at 14:16
  • @DawidKruk Hi Dawid, in which case is the Ingress kind resource needed then? Only if we have HTTP traffic that needs to be routed? Am I right? Thanks – vel Feb 21 '21 at 20:54

1 Answer

1

You have only one pod; change the metadata of the Deployment to match the metadata of the Pod:

....
kind: Deployment
metadata:
  name: frontarena-ads-aks-test
  labels:
    app: frontarena-ads-aks-test
spec:
  replicas: 1
  template:
    metadata:
      name: frontarena-ads-aks-test
      labels:
        app: frontarena-ads-aks-test
....
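
If the labels and selector line up as above, the Service should list the Pod as an endpoint; an empty ENDPOINTS column means the selector still matches nothing:

kubectl get endpoints frontarena-ads-aks-test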
pvod