
I have an ingress controller (HAProxy) and two microservices.

MS2 calls MS1. MS1 is configured with keepAliveRequest=-1 (unlimited) and a keep-alive timeout of 50 minutes. When MS2 calls MS1 via kube-proxy (a NodePort-type Service), everything is fine: MS2 only creates a few connections (10 in my case).

But if MS2 calls MS1 through the ingress controller, the connection is always closed after every 100 requests, so a lot of connections get created.

Could anyone help me?

Edit: We are using the latest version of the haproxy-ingress controller from this site: https://quay.io/repository/jcmoraisjr/haproxy-ingress?tag=latest&tab=tags

Some settings we have configured (ConfigMap):

http-keep-alive=true
load-balance=roundrobin
maxconn=20000
nbthread=10
rate-limit=OFF
rate-limit-expire=30m
rate-limit-interval=10s
rate-limit-size=3000k
servers-increment=42
servers-increment-max-disabled=66
timeout-client=60m
timeout-connect=60m
timeout-http-keep-alive=60m
timeout-http-request=15s
timeout-queue=5s
timeout-server=60m
timeout-tunnel=1h
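
Put together, the keep-alive-related keys above sit in the controller's global ConfigMap roughly like this (the ConfigMap name and namespace here are examples from our setup, not necessarily yours; they must match whatever the controller's `--configmap` flag points at):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # name/namespace are examples; they must match the
  # controller's --configmap=<namespace>/<name> argument
  name: haproxy-ingress
  namespace: ingress-controller
data:
  # only the keep-alive-related subset of our settings shown
  http-keep-alive: "true"
  timeout-client: "60m"
  timeout-http-keep-alive: "60m"
  timeout-server: "60m"
  maxconn: "20000"
```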

For load balancing, we have set the following annotations in the Ingress YAML:

ingress.kubernetes.io/balance-algorithm: roundrobin
ingress.kubernetes.io/maxconn-server: "20000"
ingress.kubernetes.io/ssl-redirect: "false"
ingress.kubernetes.io/timeout-http-request: 5m
ingress.kubernetes.io/timeout-keep-alive: 7h
ingress.kubernetes.io/timeout-queue: 5s
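
For completeness, the Ingress object carrying those annotations looks roughly like this (the host, service name, and port are placeholders, not our real values):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ms1
  annotations:
    ingress.kubernetes.io/balance-algorithm: roundrobin
    ingress.kubernetes.io/maxconn-server: "20000"
    ingress.kubernetes.io/ssl-redirect: "false"
    ingress.kubernetes.io/timeout-http-request: 5m
    ingress.kubernetes.io/timeout-keep-alive: 7h
    ingress.kubernetes.io/timeout-queue: 5s
spec:
  rules:
    - host: ms1.example.com      # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ms1        # placeholder service name
                port:
                  number: 8080   # placeholder port
```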
