4

I have a VPC network set up with a VPN connecting to the on-prem network. A Cloud Router is used to exchange routes (BGP) internally and with the VPN peer. One of the projects hosts a public Kubernetes (GKE) cluster with internal and external IPs (with alias IP ranges). It's configured to be part of the VPC network (using one of its subnetworks).

I'm trying to connect a service running on Kubernetes to resources in the internal network (via Cloud VPN). Unfortunately, this doesn't seem to work: the connection times out.

Cloud VPN and Cloud Router are set up properly and there is communication between the networks. The only issue is that I can't reach on-prem resources from Kubernetes containers.
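
For illustration, even a minimal test pod like the one below (the name, image, and target address are just placeholders) times out when trying to reach an on-prem host:

```yaml
# Throwaway pod used only to test connectivity from inside the cluster.
apiVersion: v1
kind: Pod
metadata:
  name: vpn-test            # placeholder name
spec:
  containers:
    - name: test
      image: busybox:1.36   # any small image with a shell works
      # Keep the container alive so we can exec into it,
      # e.g. `kubectl exec vpn-test -- ping <on-prem-ip>`
      command: ["sleep", "3600"]
```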

PsychoX
  • What do you mean by 'resources in the internal network'? Are you referring to the resources in your on-prem network? Most of the resources (including the cluster) inside the [VPC](https://cloud.google.com/vpc/) should be able to connect over [Cloud VPN](https://cloud.google.com/vpn/docs/concepts/overview). Have you checked [this](https://stackoverflow.com/questions/50606312/how-to-setup-vpn-from-on-premises-to-google-cloud-vpc) and [this](https://stackoverflow.com/questions/36326200/are-there-any-best-practices-on-how-to-connect-a-gke-cluster-with-an-on-premise) similar thread? – Digil Dec 16 '19 at 15:35
  • @Digil Yes. I mean a resource/server in the on-prem network. I've done some more research and I'm probably affected by this issue https://github.com/kubernetes/kubernetes/issues/46170 – PsychoX Dec 17 '19 at 07:59
  • So were you able to fix it using iptables + daemonset as mentioned [there](https://github.com/kubernetes/kubernetes/issues/46170)? If yes, please post it as an answer so that other community members can benefit from it. – Digil Dec 17 '19 at 14:01

1 Answer

1

It sounds like you are running into an issue with GKE routing. This is usually because either the pod traffic is being NAT'd when it should not be, or the pod traffic should keep the pod IP as its source but is instead being SNAT'd to the node IP. In either case, your firewalls are configured to allow one type of traffic and not the other.

To address this, GKE has the [IP masquerade agent](https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent), which lets you tweak which destination IP ranges the cluster SNATs traffic for. This gives you control over which traffic is NAT'd and which isn't, and lets you predict which source addresses to allow over the VPN and through your firewall.
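
As a rough sketch, the agent reads a ConfigMap named `ip-masq-agent` in `kube-system`; destinations listed under `nonMasqueradeCIDRs` keep the pod IP as the source. The CIDRs below are only examples, so substitute your actual pod, node, and on-prem ranges:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    # Traffic to these destination ranges keeps the pod IP as its source
    # (i.e. is NOT SNAT'd to the node IP). Example values only; use your
    # actual pod, node, and on-prem CIDRs.
    nonMasqueradeCIDRs:
      - 10.0.0.0/8
      - 192.168.0.0/16
    masqLinkLocal: false
    resyncInterval: 60s
```

Apply it with `kubectl apply -f`, then make sure the VPN advertised routes and the on-prem firewall allow the pod CIDR. Conversely, if you'd rather have the traffic appear to come from the node IPs, leave the on-prem range out of `nonMasqueradeCIDRs` and allow the node CIDR on-prem instead.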

Patrick W