I have a kind Kubernetes cluster running on an Ubuntu VM, created by following the kind documentation for enabling ingress.
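For context, the cluster config is roughly the one from the kind ingress guide, with ports 80/443 mapped onto the host (reproduced from memory, so details may differ slightly from my actual file):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
```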
The cluster runs several services, three of which I want to expose externally: one is a REST-based service, and the other two use WebSocket connections.
On the host VM, after following the docs and some fiddling, I can access these services by curl'ing localhost.
I now want to expose these services via a specific interface (ens160) so that I can hit these services with some client-side automation another team has been building out.
My first attempt was to use iptables to DNAT traffic arriving on ports 80/443 to 127.0.0.1, and this works well for the REST service:
# Redirect HTTPS traffic arriving on ens160 to loopback, and allow it through FORWARD
sudo iptables -t nat -A PREROUTING -p tcp -i ens160 --dport 443 -j DNAT --to-destination 127.0.0.1:443
sudo iptables -A FORWARD -p tcp -d 127.0.0.1 --dport 443 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
# Same for HTTP
sudo iptables -t nat -A PREROUTING -p tcp -i ens160 --dport 80 -j DNAT --to-destination 127.0.0.1:80
sudo iptables -A FORWARD -p tcp -d 127.0.0.1 --dport 80 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
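(For completeness, my understanding is that DNAT'ing externally received packets to 127.0.0.1 only works with `route_localnet` enabled on the interface; part of the "fiddling" was along these lines:)

```shell
# Allow packets arriving on ens160 to be routed to 127.0.0.1;
# without this the kernel treats them as martian packets and drops them.
sudo sysctl -w net.ipv4.conf.ens160.route_localnet=1
```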
However, the WebSocket connections fail to establish.
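For what it's worth, the handshake I am testing with looks roughly like this (the IP and path are placeholders for the real endpoint); a healthy endpoint should answer with "101 Switching Protocols":

```shell
# Attempt a raw WebSocket upgrade through the exposed interface
curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" \
  http://<vm-ens160-ip>/<ws-path>
```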
This method seems 'flimsy' to me, and I am wondering whether there is a better way to expose this Kubernetes cluster to my corp network than performing DNAT on incoming packets.
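One alternative I have considered is recreating the cluster with the port mappings bound directly to the ens160 address instead of to loopback, using kind's `listenAddress` option (the IP below is a placeholder for my ens160 address), but I am not sure whether this is the idiomatic approach:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "10.0.0.5"   # placeholder: the ens160 IP
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    listenAddress: "10.0.0.5"   # placeholder: the ens160 IP
    protocol: TCP
```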
Am I going about this the wrong way?
Thanks, Max Sargent