I have an AWS EKS cluster with only public subnets (we don't need private subnets for security reasons, and I really want to avoid the NAT data transfer charges). The cluster runs serviceA and serviceB, both exposed via a public-facing, load-balancer-backed Ingress (Traefik).
So here is my situation: we have a configuration option 'serviceB_url'. This config is used both by serviceA to access serviceB from INSIDE the cluster and to generate links that should work from OUTSIDE the cluster. So basically, I want the same URL to work both inside and outside the cluster. The DNS name points to the public-facing load balancer IP, and of course I can resolve that name from inside the cluster.
But here is my problem: I cannot access it. Because the load balancer IP is public, the traffic leaves the VPC through the internet gateway and hits the load balancer from the outside, with the node's public IP as the source, which is NOT whitelisted.
My thoughts on this so far:
We had public and private subnets before. With a NAT gateway, we could simply whitelist the public IP of the NAT. Although this worked, I don't think it is a clean solution, because the traffic takes an unnecessarily long path. Plus, as mentioned before, we want to get rid of the NAT gateway because of its high charges.
I am aware that an Ingress is designed to expose services to the OUTSIDE, and a Service should be used to expose them on the INSIDE. But going through the Service directly, I lose my reverse proxy in the middle. Plus, I wonder how this would work for a service that is picky about the URL used in the request, or when TLS is a requirement. Taking this thought further, I could imagine some sort of internally deployed reverse proxy service that does some URL rewrite magic. But yeah, I already dislike this for its hackiness.
In a classic (read: more static) infrastructure, I would probably solve this with /etc/hosts entries pointing to the private IP of the node running the service, or with some custom DNS that is only used internally.
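For completeness, the closest Kubernetes analog to an /etc/hosts entry I have found is `hostAliases` in the pod spec, but it needs a fixed IP, which is exactly what I don't have in this dynamic setup (the IP and names below are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: servicea
spec:
  hostAliases:
    # Placeholder IP -- this would have to be a stable private address,
    # which the nodes/pods in my cluster don't have.
    - ip: "10.0.42.7"
      hostnames:
        - "serviceB.example.com"
  containers:
    - name: servicea
      image: servicea:latest  # placeholder image
```

So this only moves the problem: I would still need something stable to point the alias at.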
I read that I can use CoreDNS to rewrite DNS queries, e.g. foo.example.com to foo-internal.example.com. So I could just rewrite the external name to serviceB.my-namespace.svc.cluster-domain.example. Once again, I wonder how this would work for apps that are picky about their URL and/or TLS. I would probably need to put a reverse proxy in front for that. And, again, it sounds hacky to me.
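For reference, the rewrite I have in mind would look roughly like this in the Corefile (on EKS it lives in the `coredns` ConfigMap in kube-system; note the managed CoreDNS add-on may revert manual edits, and all names here are placeholders for my actual ones):

```
.:53 {
    errors
    health
    # Rewrite the public name to the in-cluster service name, and rewrite
    # the answer back so the client sees the name it originally asked for.
    rewrite stop {
        name regex serviceB\.example\.com serviceB.my-namespace.svc.cluster.local
        answer name serviceB\.my-namespace\.svc\.cluster\.local serviceB.example.com
    }
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}
```

With the `answer name` part, serviceA would keep using https://serviceB.example.com and only the resolved IP changes, but this bypasses Traefik entirely, so any TLS certificate would have to be presented by serviceB's pod itself.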
Of course I could go and change my app and split the config option into 'serviceB_url_internal' and 'serviceB_url_external'. I would just like to solve this within Kubernetes somehow.
In the end, what I think I really want is a cluster-wide DNS config that simply returns the reverse proxy's ClusterIP instead of the IP of the internet-facing load balancer for the URL I want to access. This would solve the problem, and I would not need any URL rewrites or other funky things. Can I configure this somehow using Ingress itself? It would be great to have at least an opt-in for something like this. Or is there a service that does this? (Would this actually work?)
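Concretely, I imagine something like rewriting the query for the public hostname to the Traefik Service itself, so DNS hands out Traefik's ClusterIP while the Host header (and SNI) stays serviceB.example.com and the normal Ingress rules still apply (the Traefik service/namespace names below are guesses for my setup):

```
# Corefile fragment (sketch): point the public name at the in-cluster
# Service of the ingress controller instead of the external load balancer.
rewrite stop {
    name regex serviceB\.example\.com traefik.traefik.svc.cluster.local
    answer name traefik\.traefik\.svc\.cluster\.local serviceB.example.com
}
```

Since the request still carries the original hostname, Traefik should route it exactly as it does for external clients, including serving the matching TLS certificate, so no URL rewriting inside the apps would be needed.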
If you've read this far, thank you :)
I just wonder if I am missing something obvious here and it's actually totally easy to configure. How could this be solved in a clean way? Or is the idea of using the same URL to access a service from inside and outside the cluster just plain wrong? I can imagine this is not that big of a deal for on-premises installations, because the 'public IP' of the nodes is well known and whitelisting is not a big issue there.