
I'm trying to figure out how I can verify whether the Kubernetes Ingress is working correctly AND whether it is checking the client certificate. At the moment, this part of the infrastructure is composed as shown below:

[architecture diagram]
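For the client-certificate part, the kind of check I had in mind is roughly the following (hostname taken from the configuration below; since the HAProxy frontend runs in tcp mode, the TLS handshake should happen directly with the ingress):

# If the ingress requests a client certificate, I expect to see an
# "Acceptable client certificate CA names" section and a
# "Client Certificate Types:" line in the handshake summary;
# otherwise openssl prints "No client certificate CA names sent".
openssl s_client -connect b2b.shop.domain1.suffix:443 \
    -servername b2b.shop.domain1.suffix </dev/null 2>/dev/null \
    | grep -E "client certificate CA names|Client Certificate Types"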

The HAProxy configuration is as follows:

#------------------
# Global settings
#------------------
# Global and default settings here, hidden for simplicity


#------------------
# frontend instances
#------------------
frontend stats
    bind    *:80
    mode http
  
# Other frontend blocks here
  
frontend frontend_ssl_prod_publicip3
    bind    privateip4:443,privateip5:443
    log-format %Ci:%Cp\ [%t]\ %ft\ %b/%s\ %Tw/%Tc/%Tt\ %B\ %ts\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq
    mode tcp
    maxconn 4000
    stick-table type ip size 100k expire 30s store conn_rate(3s),conn_cur
    acl ingress req_ssl_sni -i api.shop.domain1.suffix
    acl ingress req_ssl_sni -i b2b.shop.domain1.suffix
    tcp-request inspect-delay 5s
    tcp-request connection accept if { src -f /etc/haproxy/ourservice-whitelist.lst }
    tcp-request connection reject if { src_conn_rate ge 20 }
    tcp-request connection reject if { src_conn_cur ge 30 }
    tcp-request connection track-sc0 src
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend kubernetes_ingress if ingress
    use_backend static_ssl_prod_be if www_ssl_prod
  


#------------------
# backend instances
#------------------
# other backend blocks here, hidden
  
backend kubernetes_ingress
    mode tcp
    stick-table type binary len 32 size 30k expire 30m
    acl clienthello req_ssl_hello_type 1
    acl serverhello rep_ssl_hello_type 2
    tcp-request inspect-delay 5s
    tcp-request content accept if clienthello
    tcp-response content accept if serverhello
    stick on payload_lv(43,1) if clienthello
    stick store-response payload_lv(43,1) if serverhello
    server kube-node01 kubeip1:31443 check weight 100 send-proxy agent-check agent-port 8080
    server kube-node02 kubeip2:31443 check weight 100 send-proxy agent-check agent-port 8080
    server kube-node03 kubeip3:31443 check weight 100 send-proxy agent-check agent-port 8080
 
  
# Other backends here
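Since the kubernetes_ingress servers use send-proxy, I assume the ingress controller behind port 31443 expects the PROXY protocol, so a plain curl straight at a NodePort would probably be dropped. To test a node directly while bypassing HAProxy, something like this is what I had in mind (kubeip1 and 31443 are the values from the backend above; --haproxy-protocol makes curl send the PROXY header itself):

curl -vk --haproxy-protocol \
    --resolve b2b.shop.domain1.suffix:31443:kubeip1 \
    https://b2b.shop.domain1.suffix:31443/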

On kube-node01, the kube-proxy service is running.
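To make sure NodePort 31443 is actually exposed on the nodes, I suppose the following checks are enough (run on a kube node; depending on the kube-proxy mode the port may only show up in the iptables rules rather than as a listening socket):

# on the node itself
ss -tln | grep 31443
sudo iptables-save | grep 31443

# from wherever kubectl is configured: which service owns that NodePort?
kubectl get svc --all-namespaces | grep 31443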

I am trying to curl -vvv b2b.shop.domain1.suffix from:

  1. The load balancer node

  2. The kube0# node

  3. The kube-proxy container

but all I see is:

* Rebuilt URL to: b2b.shop.domain1.suffix/
*   Trying publicip3...
* TCP_NODELAY set
* connect to publicip3 port 80 failed: Connection timed out
* Failed to connect to b2b.shop.domain1.suffix port 80: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to b2b.shop.domain1.suffix port 80: Connection timed out
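What I notice from the output is that curl is going to port 80, while frontend_ssl_prod_publicip3 only binds :443, so a plain HTTP request would never reach that frontend in the first place. The next thing I plan to try is forcing HTTPS:

# -k because I don't care about certificate validation for this test
curl -vvvk https://b2b.shop.domain1.suffix/

Even a 404 from the ingress controller's default backend would at least confirm that the HAProxy -> NodePort -> ingress path works (assuming an NGINX-style ingress with a default backend).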

How should I verify the ingress?

  • Have you tried to reset your iptables (http://insanelabs.net/linux/linux-reset-iptables-firewall-rules/)? – Malgorzata May 06 '20 at 12:07
  • Hi MaggieO, thank you for the support; where should I reset the iptables? On all the nodes? – lsambo May 07 '20 at 09:55
  • When there are multiple servers to manage, it's common practice to manage the firewall rules from a central place. A few sets of firewall rules could be written for front-end servers, back-end databases, etc. When the firewall rules need adjustment, the central control could push new rules to the servers as a file and use iptables-restore to update each host. (See: https://medium.com/swlh/manage-iptables-firewall-for-docker-kubernetes-daa5870aca4d). But first, try to reset them on the host serving b2b.shop.domain1.suffix. Can you let me know if it works? – Malgorzata May 07 '20 at 11:08
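
[For reference, the "reset" from the article linked in the first comment boils down to roughly the following on a single host; on a Kubernetes node kube-proxy should re-create its own chains on its next sync, but that is an assumption:]

# reset iptables to an empty, allow-everything state
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t nat -F
iptables -t mangle -F
iptables -F
iptables -X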
