If I understood your situation right, then you have 2 private GKE clusters and would like to know whether it is possible to reach apps that run on one cluster from the other one.
The short and general answer is Yes, that's possible.
Below you can see the details and the proof from my testing.
I have quickly built the following setup (both clusters are identical, except for the Master/Service/Pod address ranges):
cluster-4
Cluster Master version: 1.14.10-gke.27
Total size: 1
Network: default
Subnet: default
VPC-native (alias IP): Enabled
Pod address range: 10.24.0.0/14
Service address range: 10.0.32.0/20
Private cluster: Enabled
Master address range: 172.16.1.0/28
cluster-5
Pod address range: 10.60.0.0/14
Service address range: 10.127.0.0/20
Private cluster: Enabled
Master address range: 172.16.2.0/28
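For reference, a private VPC-native cluster with such settings can be created roughly like this (a sketch only, shown for cluster-4; I list just the flags that matter here and leave the rest at their defaults):
$ gcloud container clusters create cluster-4 \
    --num-nodes=1 \
    --network=default --subnetwork=default \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.1.0/28 \
    --cluster-ipv4-cidr=10.24.0.0/14 \
    --services-ipv4-cidr=10.0.32.0/20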
The firewall configuration should allow traffic from the 'Pod address ranges' to the nodes. In other words, traffic originating from cluster-5's 'Pod address range' has to be delivered to cluster-4's node IP.
For that I have added a rule that allows TCP traffic from 10.60.0.0/14 to "all nodes in the network" on port 31526.
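If you prefer the CLI, an equivalent rule can be created with something like the following (the rule name is just illustrative):
$ gcloud compute firewall-rules create allow-cluster5-pods-to-nodes \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:31526 \
    --source-ranges=10.60.0.0/14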
My nginx runs on cluster-4
$ kubectl get nodes -o wide
NAME            STATUS   AGE   VERSION           INTERNAL-IP   EXTERNAL-IP
gke-cluster-4   Ready    48m   v1.14.10-gke.27   10.128.0.12
$ kubectl get svc -o wide
NAME          TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
hello-nginx   NodePort   10.0.40.183   <none>        80:31526/TCP   2m23s
So my nginx should be available at 10.128.0.12:31526 for all clients that are outside of my cluster-4.
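For completeness, a Deployment and NodePort Service like the ones above can be reproduced with something along these lines (the image is an assumption; the NodePort itself gets allocated automatically):
$ kubectl create deployment hello-nginx --image=nginx
$ kubectl expose deployment hello-nginx --type=NodePort --port=80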
I've run a busybox pod on cluster-5:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox-68dc67fcc5-gwd95 1/1 Running 0 11s
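(The pod was started with something along these lines; the exact command is an assumption, any long-running image will do:)
$ kubectl run busybox --image=busybox -- sleep 3600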
Then I tried accessing my nginx:
$ kubectl exec -it busybox-68dc67fcc5-gwd95 -- sh
# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.60.0.1       0.0.0.0         UG    0      0        0 eth0
10.60.0.0       *               255.255.255.0   U     0      0        0 eth0
# wget 10.128.0.12:31526
Connecting to 10.128.0.12:31526 (10.128.0.12:31526)
saving to 'index.html'
index.html 100% |************| 612 0:00:00 ETA
'index.html' saved
The 10.60.0.1 is the IP address that is assigned to the cbr0 interface on my cluster-5 node:
kiwi@gke-cluster-5 ~ $ ifconfig
cbr0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST> mtu 1460
inet 10.60.0.1
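That address is the first IP of the /24 Pod range assigned to this particular node, which you can cross-check with something like (no output shown here; on a real cluster the node name is longer than in my shortened listing above):
$ kubectl get nodes -o custom-columns=NAME:.metadata.name,POD_CIDR:.spec.podCIDR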
Last but not least, I have checked what the official documentation says on this topic, and it appears to be well in line with my findings.
Private clusters give you the ability to isolate nodes from having inbound and outbound connectivity to the public internet. This is achieved as the nodes have internal RFC 1918 IP addresses only.
Even though the node IP addresses are private, external clients can reach Services in your cluster. If we speak about a NodePort type Service, then you need to create an Ingress.
GKE uses information in the Service and the Ingress to configure an HTTP(S) load balancer. External clients can then call the external IP address of the HTTP(S) load balancer.
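To illustrate that last point with the Service from this example, a minimal Ingress on cluster-4 could look like the sketch below (the Ingress name is illustrative; GKE then provisions the external HTTP(S) load balancer from it):
$ kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-nginx-ingress
spec:
  backend:
    serviceName: hello-nginx
    servicePort: 80
EOF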