
I have a 3-node Kubernetes cluster with several services whose pods are spread across all the nodes. Right now, when an individual pod makes a request to some external server on my network, that request gets routed out via the node's IP, the server responds, and everything works. My issue is that in most cases the request is (for example) an snmpget to a device whose ACL says it will only answer to a certain source IP. For ingress I have a VIP for the whole cluster, but I need to somehow route all outbound requests via that same address and make sure that the responses make it back to the original requesting container.

Do I have to set up a proxy server on my nodes, or is there a built-in Kubernetes way to do this that I am missing?
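For context, one node-level approach I've looked at is SNAT via iptables. This is only a sketch under assumptions: the VIP (10.0.0.100 here is hypothetical), the pod CIDR, and the interface name all need adjusting, and the VIP must actually be bound on the node doing the SNAT, or the responses will never come back:

```shell
# Sketch: rewrite the source address of all traffic leaving this node
# from the pod network so it appears to come from the cluster VIP.
# Run on each node as root. conntrack maps responses back to the pod.
POD_CIDR="10.244.0.0/16"   # adjust to your cluster's pod network
VIP="10.0.0.100"           # hypothetical ingress VIP, assumed bound locally
EXT_IF="eth0"              # the node's external interface

iptables -t nat -A POSTROUTING -s "$POD_CIDR" -o "$EXT_IF" \
         -j SNAT --to-source "$VIP"
```

The catch is that a VIP is normally held by only one node at a time (e.g. via keepalived), so SNAT-to-VIP from the other nodes would break return routing; that's part of why I'm asking whether there's a cleaner way.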

Thanks for the help

Bryancan
  • I don't think this is possible in Kubernetes, at least not using any of the built-in resources. You'll have to set up a proxy server, or a NAT/forwarding/masquerading "service" from inside your cluster to push traffic outside. Here's a similar question that uses some GKE/GCP-specific magic (if that's an option for you) - https://stackoverflow.com/questions/41133755/static-outgoing-ip-in-kubernetes – ffledgling Apr 28 '18 at 18:34
  • Thanks for the input. This will not be in any cloud provider, but instead a private set of VMware VMs. I wonder if I can make a proxy server that is also installed on the same VMs but outside of kubernetes that all nodes can be told to communicate through? – Bryancan Apr 30 '18 at 14:40
  • You can. You can even create a "special" kubernetes pod that has the host network mounted inside it and you can expose that behind a kubernetes service. – ffledgling May 01 '18 at 08:59
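The "special" pod ffledgling describes might look roughly like this. Everything here is a hypothetical sketch: the names, the pinned node, and the proxy image are placeholders, and `hostNetwork: true` is the part that puts the proxy directly on the node's network stack so its traffic leaves from the node (VIP) address:

```yaml
# Hypothetical sketch: a proxy pod on the host network, fronted by a
# ClusterIP Service. Other pods send outbound requests to the Service,
# and the proxy forwards them from the node that holds the VIP.
apiVersion: v1
kind: Pod
metadata:
  name: egress-proxy          # hypothetical name
  labels:
    app: egress-proxy
spec:
  hostNetwork: true           # use the node's network namespace
  nodeName: node-with-vip     # assumption: pin to the node holding the VIP
  containers:
  - name: proxy
    image: some-proxy-image   # e.g. a Squid or HAProxy image of your choice
    ports:
    - containerPort: 3128
---
apiVersion: v1
kind: Service
metadata:
  name: egress-proxy
spec:
  selector:
    app: egress-proxy
  ports:
  - port: 3128
    targetPort: 3128
```

Application pods would then be configured to use `egress-proxy:3128` as their outbound proxy instead of reaching the external device directly.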

0 Answers