I created the GKE private cluster via Terraform (`google_container_cluster` with `private = true` and `region` set) and installed the `stable/openvpn` Helm chart. My setup is basically the same as the one described in this article: https://itnext.io/use-helm-to-deploy-openvpn-in-kubernetes-to-access-pods-and-services-217dec344f13, and I can reach a `ClusterIP`-only service as described there. However, while I am connected to the VPN, `kubectl` fails because it cannot reach the master.
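For context, the relevant Terraform looks roughly like this (resource names, region, and CIDRs are placeholders; newer provider versions express the private settings via `private_cluster_config`):

```hcl
# Sketch of my cluster definition -- names, region, and CIDRs are placeholders
resource "google_container_cluster" "primary" {
  name     = "my-private-cluster"
  location = "europe-west1" # "region" on older provider versions

  # VPC and subnet resources are defined elsewhere
  network    = google_compute_network.vpc.self_link
  subnetwork = google_compute_subnetwork.private.self_link

  # Newer provider versions use this block instead of a bare private flag
  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false
    master_ipv4_cidr_block  = "10.0.0.0/28"
  }

  # Pod/service CIDRs come from secondary ranges on the subnet
  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"
    services_secondary_range_name = "services"
  }
}
```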
I left the `OVPN_NETWORK` setting at its default (`10.240.0.0`) and changed the `OVPN_K8S_POD_NETWORK` and subnet mask settings to the secondary range I chose when I created the private subnet that the cluster lives in.
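In Helm terms, I'm overriding values along these lines (the pod CIDR below is a placeholder for my actual secondary range, and I'm going from memory on the chart's exact value keys):

```yaml
# values.yaml overrides for stable/openvpn -- pod CIDR is a placeholder
openvpn:
  OVPN_NETWORK: "10.240.0.0"           # left at the chart default
  OVPN_SUBNET: "255.255.0.0"
  OVPN_K8S_POD_NETWORK: "10.154.0.0"   # my pod secondary range (placeholder)
  OVPN_K8S_POD_SUBNET: "255.255.240.0" # its subnet mask (placeholder)
```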
I even tried adding `10.240.0.0/16` to my `master_authorized_networks_config`, but I'm pretty sure that setting only applies to external networks (adding the external IP of a completely different OpenVPN server lets me run `kubectl` while connected to it).
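For reference, this is how I added it:

```hcl
# Attempt to whitelist the VPN client range on the master -- didn't help
master_authorized_networks_config {
  cidr_blocks {
    cidr_block   = "10.240.0.0/16"
    display_name = "openvpn-clients"
  }
}
```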
Any ideas what I'm doing wrong here?
Edit: I just remembered that I had to set a value for `master_ipv4_cidr_block` in order to create a private cluster, so I added that range (`10.0.0.0/28`) to the ovpn.conf file as `push "route 10.0.0.0 255.255.255.240"`, but that didn't help. The docs for that setting state:
> Specifies a private RFC1918 block for the master's VPC. The master range must not overlap with any subnet in your cluster's VPC. The master and your cluster use VPC peering. Must be specified in CIDR notation and must be a /28 subnet.
But what does that imply for an OpenVPN client on a subnet outside the cluster's VPC? How do I leverage the aforementioned VPC peering?
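Is the fix something along the lines of exporting custom routes over the peering that GKE creates, e.g. (peering and network names are placeholders, and I haven't verified this):

```sh
# Untested guess -- peering and network names are placeholders
gcloud compute networks peerings update gke-xxxx-peer \
  --network=my-vpc \
  --export-custom-routes
```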