
I work in a mixed GCE / GKE environment, as I am sure many GKE customers do.

Currently, the skyDNS service from the cluster is not exposed to GCE hosts in the same project. The IP address range used by the DNS service is different from the normal application service range, which is routable from all of GCE (each cluster node gets a route for its own subnet).
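
To illustrate the mismatch, this is roughly how I am checking it (assuming the DNS service is named kube-dns in the kube-system namespace, which may not match every cluster):

    # The cluster DNS service IP, allocated from the service CIDR that
    # GCE has no route for.
    kubectl get svc kube-dns --namespace=kube-system

    # The per-node routes the cluster creates for each node's own subnet.
    gcloud compute routes list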

I specifically have a headless service in GKE that I want to be able to reliably access via DNS from my GCE hosts. As a workaround, I added routing to the node hosting the DNS pod, and it works.
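
For reference, the workaround is roughly the following (the route name, DNS service IP, instance name, and zone are placeholders from my setup; substitute your cluster's DNS service IP and whichever node is currently running the DNS pod):

    # Find the node currently running the DNS pod.
    kubectl get pods --namespace=kube-system -o wide | grep dns

    # Route the DNS service IP (10.11.240.10 here is a placeholder) to that
    # node; kube-proxy on the node forwards the traffic on to the pod.
    gcloud compute routes create kube-dns-route \
      --destination-range=10.11.240.10/32 \
      --next-hop-instance=gke-mycluster-default-pool-node-1 \
      --next-hop-instance-zone=us-central1-a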

However, I fully understand that a simple skyDNS pod restart can break my route.

My question is: can the cluster master add this route to GCE the way it does for the normal node subnets or, even better, pull the DNS address from the normal service subnets where routing already works?

Can this be done?

1 Answer


The DNS service should be no different than other services. The general problem of accessing private GKE services from outside the cluster is not currently solved all that well.

Your "routing to the node" should actually work just fine across skyDNS restarts. If the pod happens to get scheduled somewhere else, kube-proxy + the pod routing rules will get traffic where it needs to go (that's actually one of the options proposed when similar questions (here & here) have been asked).

CJ Cullen
  • We're trying to do the same and I keep coming back to the same responses. I see that Cluster Federation is now available, but I'm trying to connect from outside the cluster, so I think that my options are a bastion route or running kube-proxy on the GCE instance. With the former, even if kube-proxy+pod routes it to the right place internally in the cluster, that would fail if the node is removed (node failure, or autoscaling removes it), right? Is there a newer, simpler approach to this that I'm missing? – jwadsack Nov 08 '16 at 23:02
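
(For completeness, the "running kube-proxy on the GCE instance" option mentioned in that comment amounts to something like the sketch below; the kubeconfig path is a placeholder, and the credentials in it need to be able to watch the cluster's services and endpoints.)

    # Run kube-proxy on the GCE VM itself so that the cluster's service IPs
    # (including the DNS service IP) get programmed into the VM's own
    # iptables rules. The path below is illustrative.
    sudo kube-proxy --kubeconfig=/etc/kubernetes/proxy.kubeconfig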