I have been using Kubernetes on GCP (GKE with the platform-provided HTTPS global load balancer), but I find the load balancer hard to manage with hundreds of domains and dozens of unique public backend sites in Kubernetes, where each site gets its own backend service definition and is plugged into a single load balancer.
The cluster is not set up with VPC-native (alias IP) addressing, so every site gets a NodePort service, and that NodePort is added to a backend service. Because of this, the load balancer's health checks are somewhat misleading: a single pod returning an error response means an entire instance group is considered unhealthy, when in fact the pod could be somewhere else in the cluster entirely. Similarly, because of the NodePort configuration, a request may be routed to one instance group (zone) and then forwarded by the Kubernetes service to a node in another zone to be handled.
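For illustration, the per-site setup looks roughly like this (the site name and ports are hypothetical):

```yaml
# Hypothetical per-site NodePort service. The Google load balancer's
# backend service targets this node port on every node in the instance
# group, so a bad response from any node marks the whole group unhealthy.
apiVersion: v1
kind: Service
metadata:
  name: site-example-com
spec:
  type: NodePort
  selector:
    app: site-example-com
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080   # this port is added to a GCLB backend service
```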
If the cluster had VPC-native IPs enabled, the sites could be configured with network endpoint groups (NEGs) and traffic routed directly to the pods, fixing both the circuitous routing and the health checks. But that wouldn't reduce the complexity of the load balancer itself.
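On a VPC-native cluster, container-native load balancing is enabled per service with the `cloud.google.com/neg` annotation, something like this sketch (service name hypothetical):

```yaml
# Hypothetical service using a NEG so the load balancer targets pod IPs
# directly instead of node ports. Requires a VPC-native (alias IP) cluster.
apiVersion: v1
kind: Service
metadata:
  name: site-example-com
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: site-example-com
  ports:
    - port: 80
      targetPort: 8080
```

Health checks then hit the pods themselves, so an unhealthy pod only removes that endpoint, not an entire zone's instance group.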
But adding a gateway to Kubernetes (an Istio gateway, NGINX, Ambassador, Traefik, etc.) would provide a layer where layer 7 routing could be configured inside Kubernetes, which would minimize the amount of configuration on the Google load balancer and add features it is missing.
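Consolidated in-cluster routing could then be a single Ingress (or equivalent gateway config) carrying many host rules, so the Google load balancer only needs one backend — a sketch assuming an NGINX ingress controller and hypothetical hostnames and services:

```yaml
# Hypothetical Ingress handled by an in-cluster NGINX controller. The
# Google load balancer forwards all traffic to the controller's single
# backend; host-based (layer 7) routing happens inside Kubernetes.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: all-sites
spec:
  ingressClassName: nginx
  rules:
    - host: site-a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: site-a
                port:
                  number: 80
    - host: site-b.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: site-b
                port:
                  number: 80
```

Adding or removing a site becomes an edit to in-cluster config rather than a change to the load balancer's backend services.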
Is this method of adding a layer 7 gateway going to decrease the overall reliability of the application?