I have an AWS EKS cluster (v1.24) and I use the ingress-nginx Helm chart (v4.7.1):
dependencies:
  - name: ingress-nginx
    version: 4.7.1
    repository: https://kubernetes.github.io/ingress-nginx
    condition: ingress-nginx.enabled
    alias: ingress-public
Values for it:
ingress-public:
  enabled: true
  controller:
    publishService:
      enabled: true
    kind: DaemonSet
    ingressClass: nginx-public
    ingressClassResource:
      name: nginx-public
      enabled: true
      default: false
      controllerValue: "k8s.io/ingress-nginx-public"
    service:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
        service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
        service.beta.kubernetes.io/aws-load-balancer-type: nlb
        service.beta.kubernetes.io/aws-load-balancer-subnets: "Subnet_EKS_1, Subnet_EKS_2"
        service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
      external:
        enabled: true
      externalTrafficPolicy: Cluster
    config:
      allow-snippet-annotations: "true"
      enable-real-ip: "true"
      proxy-body-size: 100M
      use-proxy-protocol: "true"
      use-forwarded-headers: "true"
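For reference, with the chart's defaults these values should render the controller Service roughly like the sketch below; the release name and selector label are placeholders I've shortened, not copied from my cluster:

# Approximate rendered controller Service (generated by the chart, not hand-written)
apiVersion: v1
kind: Service
metadata:
  name: my-release-ingress-public-controller  # placeholder release name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-subnets: "Subnet_EKS_1, Subnet_EKS_2"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
    - name: https
      port: 443
      targetPort: https
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx  # shortened; the chart adds more selector labels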
MY PROBLEM IS:
When I apply this setup, an internet-facing Network Load Balancer is created and my nodes are attached to it via target groups, all with a healthy status. I get a DNS name for this load balancer, but I can't connect to it: I can't even reach it to get a 404, and nc -v lb_dnsname 443 fails.

If I deploy a simple nginxdemos/hello workload (Deployment, Service, Ingress with a hostname), I can't connect to it either, even though I can see the load balancer's DNS name attached to the Ingress. The security group for the EKS cluster has rules (created automatically) allowing access to the target group ports from 0.0.0.0/0. The route table for the subnets routes 0.0.0.0/0 to my NAT gateway and the VPC CIDR to local.
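The test workload is essentially the sketch below; the names and the hostname are placeholders, not the exact manifests I applied:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello  # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginxdemos/hello
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
spec:
  ingressClassName: nginx-public  # matches the ingressClassResource name above
  rules:
    - host: hello.example.com  # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello
                port:
                  number: 80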
Please point out what I'm doing wrong. (Yes, I know I could use the AWS Load Balancer Controller, but I want to use the in-tree one.)