
Trying to create a Load Balancer resource with Kubernetes (for an EKS cluster). It works normally with a label selector, but we want only one LB per cluster and then let an Ingress route traffic to the services. Here is what I currently have:

kind: Service
apiVersion: v1
metadata:
  namespace: default
  name: name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0  # requests an internal ELB (older CIDR form of this annotation)
spec:
  ports:
  - port: 80
  type: LoadBalancer

This creates an LB and gives it an internal DNS name, but the instances never become healthy (even though they actually are).

Any advice?

shrimpy

2 Answers


Per the discussion in another question you posted, I think what you want is to achieve one Load Balancer per cluster, as described here: Save on your AWS bill with Kubernetes Ingress.

To achieve this, you would need to create:

  1. A LoadBalancer Service with the Nginx ingress controller pod as its backend.
  2. That LoadBalancer Service gets an external IP (on AWS, a DNS name); point all your cluster traffic at it.
  3. Ingress rules that route cluster traffic to your services however you wish.

So your traffic would go through the following pipeline:

all traffic -> AWS LoadBalancer -> Node1:xxxx -> Nginx-Ingress-Controller Service -> Nginx-Ingress-Controller Pod -> Your Service1 (based on your ingress rules) -> Your Pod

Here is an example of how to bring up an Nginx ingress controller: https://hackernoon.com/setting-up-nginx-ingress-on-kubernetes-2b733d8d2f45
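For concreteness, a minimal sketch of the two pieces is below. The names (ingress-nginx, the app: ingress-nginx label, my-service, app.internal.example.com) are placeholders for your own setup, and the Ingress API version / ingress class wiring depends on your cluster and controller versions:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx        # matches the ingress controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: default
spec:
  ingressClassName: nginx     # or the kubernetes.io/ingress.class annotation on older clusters
  rules:
  - host: app.internal.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service  # your existing ClusterIP Service
            port:
              number: 80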

Fei
  • This is exactly what I meant, yes! Thanks, and I'm sorry if it wasn't clear... I'll try it out! – shrimpy Jun 05 '19 at 12:48
  • Okay, I've followed that guide (https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes) with the information you gave me. To be noted: I added the line "service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0" to the load balancer Service, as we use an internal LB. (continued in next comment) – shrimpy Jun 05 '19 at 16:32
  • It is still resolving the name (that usually takes a bit of time). But curiously only 1 of the 2 instances is healthy...? They have the same configuration. Is it because of this? "By default the Nginx Ingress LoadBalancer Service has service.spec.externalTrafficPolicy set to the value Local, which routes all load balancer traffic to nodes running Nginx Ingress Pods. The other nodes will deliberately fail load balancer health checks so that Ingress traffic does not get routed to them." – shrimpy Jun 05 '19 at 16:35
  • Yes, you are right. Here is a good article explaining externalTrafficPolicy: https://www.asykim.com/blog/deep-dive-into-kubernetes-external-traffic-policies. externalTrafficPolicy is set to Cluster by default, and the config you used from https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes sets externalTrafficPolicy to Local. This is why only one node passes the health check. – Fei Jun 06 '19 at 03:28
  • Also, there is an update to service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0. It should take a true / false value now: https://github.com/kubernetes/kubernetes/issues/17620 (see the sketch after these comments). – Fei Jun 06 '19 at 03:29
  • Okay, it finally resolved, but I'm getting a 503 Service Unavailable... I rebuilt the LB and the Ingress with the "true" setting and I'm still getting that 503. The Ingress is working but it isn't redirecting correctly. I'll open a new question on Stack Overflow; would love to see your advice there. – shrimpy Jun 06 '19 at 08:23
  • Here is the thread: https://stackoverflow.com/questions/56474192/kubernetes-503-unavailable-502-bad-getaway. Now I get 502 and 503, getting somewhere! – shrimpy Jun 06 '19 at 09:19
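Putting those last comments together, here is a hedged sketch of how the ingress controller's LoadBalancer Service could look with the newer boolean annotation value and an explicit externalTrafficPolicy. The names are placeholders, and Cluster vs Local is a trade-off: Cluster lets every node pass the LB health check but adds a hop and loses the client source IP, while Local keeps the source IP but only nodes running an ingress controller pod pass the check.

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    # newer boolean form of the internal-LB annotation discussed above
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster   # default; Local would fail health checks on nodes without an ingress pod
  selector:
    app: ingress-nginx
  ports:
  - port: 80
    targetPort: 80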

What does the monitoring page for the LB target group show for the failures? Are there HTTP error responses or just connection errors? Is the security group for the K8S nodes set up to allow ingress from the LB?

  • Yes, the worker nodes' security group allows traffic from the load balancer's security group. – shrimpy Jun 04 '19 at 14:44
  • The pods are all healthy. The instances are healthy as well. The load balancer reports OutOfService for the instances, so it feels like it can't connect through...? – shrimpy Jun 04 '19 at 14:51
  • A curious thing as well: the first time I tried without a selector, it reported no healthy instances. I scrapped the LB, re-applied it with selectors, and it worked (as expected). Then I changed the YAML to not use the selector and re-applied (I didn't delete it), and it doesn't seem to see the difference...? It still sees the 2 healthy instances. – shrimpy Jun 04 '19 at 14:53
  • Is access from the LB on all ports? – Darren Reddick Jun 04 '19 at 14:57
  • What port does the target group that K8S has created in AWS show? I believe it should create a NodePort corresponding to this on each node, and the LB will need access on that port (see the sketch after these comments). – Darren Reddick Jun 04 '19 at 14:58
  • Yes, all traffic is allowed: all traffic, all protocols, all ports. – shrimpy Jun 04 '19 at 15:24
  • The LB uses a port that changes every time I recreate one. I do declare port 80 in my loadbalancer.yaml. Here is what it shows now: PORT(S) 80:31101/TCP – shrimpy Jun 04 '19 at 15:29
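Following up on the NodePort point in these comments: the 80:31101/TCP mapping means the Service listens on node port 31101, and that node port is what the LB target group has to reach on every instance. If the port changing on every recreate is a problem, one option is to pin it in the Service spec. A sketch only: 31101 is simply the value from the comment above, and any pinned value must fall inside the cluster's NodePort range (30000-32767 by default).

spec:
  type: LoadBalancer
  ports:
  - port: 80          # port exposed by the Service / LB
    targetPort: 80    # port on the backend pods
    nodePort: 31101   # pinned node port that the LB target group must be able to reach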