I have an existing load balancer configured to listen on port 4443 in front of an mTLS, TCP-based backend. I am now trying to convert this load balancer setup to work via the Kubernetes NGINX ingress controller. The backends are listening on port 19000.
My existing LB setup works with mTLS like this:
curl -v --cacert ./certs_test/ca.crt --key ./certs_test/server.key --cert ./certs_test/server.crt -k https://10.66.49.164:4443/api/testtools/private/v1/test/testGetTenants
With the NGINX ingress controller I have created a private LB and overridden the default HTTPS port 443 to 4443 in the controller args, as shown below (a sketch of the matching controller Service follows the args).
- /nginx-ingress-controller
- --publish-service=$(POD_NAMESPACE)/ingress-chart
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=$(POD_NAMESPACE)/ingress-chart
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --https-port=4443
- --default-ssl-certificate=$(POD_NAMESPACE)/tls-secret
- --enable-ssl-passthrough
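For context, the controller itself sits behind a LoadBalancer-type Service whose listener matches the overridden port. This is only a minimal sketch, assuming the Service name from --publish-service and the usual ingress-nginx selector labels; the annotation that makes the LB private is provider-specific and is left as a placeholder comment:
apiVersion: v1
kind: Service
metadata:
  name: ingress-chart
  # annotations: the provider-specific "internal/private load balancer" annotation goes here
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 4443        # LB listener, same port as the existing setup
    targetPort: 4443  # matches --https-port=4443 on the controller
    protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-chart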
I want to terminate mTLS at the ingress level so that authentication works as expected. Below is the setup I currently have.
ClusterIP Service YAML
apiVersion: v1
kind: Service
metadata:
  name: testtools-service
spec:
  type: ClusterIP
  ports:
  - port: 19000
    protocol: TCP
    targetPort: 19000
    name: https-testtools
  selector:
    app: testtools
Deployment YAML spec
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testtools-deployment
spec:
  selector:
    matchLabels:
      app: testtools
  replicas: 3
  template:
    metadata:
      labels:
        app: testtools
    spec:
      containers:
      - name: testtools
        image: tst-testtools-service-np:214
        livenessProbe:
          httpGet:
            path: /api/testtools/private/health
            port: 19001
            scheme: HTTPS
          initialDelaySeconds: 300
        readinessProbe:
          httpGet:
            path: /api/testtools/private/health
            port: 19001
            scheme: HTTPS
          initialDelaySeconds: 120
        ports:
        - containerPort: 19000
          name: https-testtools
        volumeMounts:
        - mountPath: /test/logs
          name: test-volume
        - mountPath: /etc/pki
          name: etc-service
        - mountPath: /etc/environment
          name: etc-env
        - mountPath: /etc/availability-domain
          name: etc-ad
        - mountPath: /etc/identity-realm
          name: etc-idr
        - mountPath: /etc/region
          name: etc-region
        - mountPath: /etc/fault-domain
          name: etc-fd
        - mountPath: /etc/hostclass
          name: etc-hc
        - mountPath: /etc/physical-availability-domain
          name: etc-pad
        resources:
          requests:
            memory: "8000Mi"
      volumes:
      - name: test-volume
        hostPath:
          path: /test/logs
          type: Directory
      - name: etc-service
        hostPath:
          path: /etc/pki
          type: Directory
      - name: etc-env
        hostPath:
          path: /etc/environment
          type: File
      - name: etc-ad
        hostPath:
          path: /etc/availability-domain
          type: File
      - name: etc-idr
        hostPath:
          path: /etc/identity-realm
          type: File
      - name: etc-region
        hostPath:
          path: /etc/region
          type: File
      - name: etc-fd
        hostPath:
          path: /etc/fault-domain
          type: File
      - name: etc-hc
        hostPath:
          path: /etc/hostclass
          type: File
      - name: etc-pad
        hostPath:
          path: /etc/physical-availability-domain
          type: File
      dnsPolicy: Default
My Ingress spec
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testtools-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    #nginx.ingress.kubernetes.io/use-regex: "true"
    #nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    #nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    #nginx.ingress.kubernetes.io/auth-tls-secret: "default/tls-new-cert"
    #nginx.ingress.kubernetes.io/auth-tls-verify-depth: "3"
    #nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
    nginx.ingress.kubernetes.io/auth-tls-secret: default/my-certs
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "3"
    nginx.ingress.kubernetes.io/proxy-ssl-name: ingress-nginx-controller
    nginx.ingress.kubernetes.io/proxy-ssl-secret: default/my-certs
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
    nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: "5"
spec:
  tls:
  - secretName: my-certs
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: testtools-service
            port:
              number: 19000
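The my-certs secret referenced by auth-tls-secret, proxy-ssl-secret, and the tls section carries the CA bundle used for client verification alongside the server certificate and key. A rough sketch, assuming the same files used in the curl example above (placeholders instead of real base64 data):
apiVersion: v1
kind: Secret
metadata:
  name: my-certs
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64 of server.crt>  # certificate presented by the ingress
  tls.key: <base64 of server.key>  # matching private key
  ca.crt: <base64 of ca.crt>       # CA used to verify client certificates (auth-tls) and upstream certs (proxy-ssl)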
What I am trying to achieve is passthrough at the load balancer level and termination at the ingress, so that mTLS works as expected and the API call succeeds. I want to know if my Ingress setup is correct, and whether this is achievable with the Kubernetes ingress controller at all. With a LoadBalancer-type Service per backend this is straightforward, but I was trying this setup to avoid creating too many load balancers. Termination at the ingress controller is working fine, but further calls simply don't return any response. Your inputs on this would help.
$ curl -v --cacert ./certs_test/ca.crt --key ./certs_test/server.key --cert ./certs_test/server.crt -k https://10.66.48.120:4443/api/testtools/private/v1/test/testGetTenants
* About to connect() to 10.66.48.120 port 4443 (#0)
* Trying 10.66.48.120...
* Connected to 10.66.48.120 (10.66.48.120) port 4443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: ./certs_test/ca.crt
CApath: none
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: O=Acme Co,CN=Kubernetes Ingress Controller Fake Certificate
* start date: Feb 23 18:03:52 2022 GMT
* expire date: Feb 23 18:03:52 2023 GMT
* common name: Kubernetes Ingress Controller Fake Certificate
* issuer: O=Acme Co,CN=Kubernetes Ingress Controller Fake Certificate
> GET /api/prodtools/private/v1/common/getTenants HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.66.48.120:4443
> Accept: */*
>
* Connection #0 to host 10.66.48.120 left intact
P
$