I have two projects in Google Cloud. One of them has an endpoint I want to reach (published as a NodePort Service). The port should stay on the internal network, but somehow be accessible from the second project.
Researching this problem led me to Shared VPC. I designated a host project and a service project, set up a shared VPC network, and created GKE clusters in the same network and subnet. It seems like everything should be connected, but in reality there is still no connectivity.
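In case it matters, this is roughly the sequence I used (ZONE/REGION are placeholders, and the secondary range names are just what I'm calling them here):

```
# Enable Shared VPC on the host project and attach the service project
gcloud compute shared-vpc enable HOST_PROJECT_ID
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID \
    --host-project HOST_PROJECT_ID

# Create the tts cluster in the service project on the shared subnet,
# VPC-native, using the subnet's secondary ranges
gcloud container clusters create tts \
    --project SERVICE_PROJECT_ID \
    --zone ZONE \
    --enable-ip-alias \
    --network projects/HOST_PROJECT_ID/global/networks/shared-net \
    --subnetwork projects/HOST_PROJECT_ID/regions/REGION/subnetworks/tts \
    --cluster-secondary-range-name tts-pods \
    --services-secondary-range-name tts-services
```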
Any ideas about possible causes and further debugging steps?
Here are the settings of the shared subnet. It has secondary IP ranges for Services and Pods for the two GKE clusters; let's call them gateway and tts. They share the same network and subnet, so I would expect their pods to be able to communicate.
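The same ranges can be listed with gcloud, in case that is more useful than the console view (REGION is a placeholder):

```
gcloud compute networks subnets describe tts \
    --project HOST_PROJECT_ID \
    --region REGION \
    --format "yaml(ipCidrRange, secondaryIpRanges)"
```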
There is a NodePort Service exposed at 10.0.37.97 in the tts cluster (service project). It is accessible from a pod in the tts cluster (checked with kubectl exec). It is not accessible with the same check from a pod in the gateway cluster (host project).
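The check itself is trivial; essentially this, run in both clusters (pod name and port are placeholders):

```
# Works from a pod in the tts cluster:
kubectl exec -it SOME_POD -- curl -sv http://10.0.37.97:PORT/

# The same command from a pod in the gateway cluster
# (with kubectl switched to that cluster's context) times out.
```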
Here are the clusters' network settings.
Networking for tts (service project)
Private cluster: Disabled
Network: shared-net (project: HOST_PROJECT_ID)
Subnet: tts (project: HOST_PROJECT_ID)
VPC-native traffic routing: Enabled
Cluster pod address range (default): 10.4.0.0/14
Maximum pods per node: 110
Service address range: 10.0.32.0/20
Intranode visibility: Disabled
NodeLocal DNSCache: Disabled
HTTP Load Balancing: Enabled
Subsetting for L4 Internal Load Balancers: Disabled
Control plane authorized networks: Disabled
Network policy: Disabled
Dataplane V2: Disabled
Networking for gateway (host project)
Private cluster: Disabled
Network: shared-net
Subnet: tts
VPC-native traffic routing: Enabled
Cluster pod address range (default): 10.192.0.0/14
Maximum pods per node: 110
Service address range: 10.196.0.0/20
Intranode visibility: Disabled
NodeLocal DNSCache: Disabled
HTTP Load Balancing: Enabled
Subsetting for L4 Internal Load Balancers: Disabled
Control plane authorized networks: Disabled
Network policy: Disabled
Dataplane V2: Disabled
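For completeness, the same networking settings can be dumped per cluster with gcloud (ZONE is a placeholder):

```
# gateway lives in the host project; for tts use --project SERVICE_PROJECT_ID
gcloud container clusters describe gateway \
    --project HOST_PROJECT_ID \
    --zone ZONE \
    --format "yaml(network, subnetwork, ipAllocationPolicy)"
```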