
The scenario has two Kubernetes clusters with Istio replicated control planes configured and a forward for the .global zone in kube-dns. Requests made from the originating pod must use consistent naming in both clusters, meaning that ".global" should not be used directly from the originating pod.
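
For reference, the .global forward is the kube-dns stub-domain configuration from the Istio replicated control planes setup; a minimal sketch (the IP below is a placeholder for the istiocoredns service ClusterIP in this cluster):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  # 10.96.0.100 is a placeholder; use the actual istiocoredns service ClusterIP
  stubDomains: |
    {"global": ["10.96.0.100"]}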

With this in mind, the idea is to have the originating pod in cluster 1 (bar-1.namespace1) reach foo-1.namespace2, while having Istio redirect the traffic for "foo-1.namespace2" to "foo-1.namespace2.global" so it can be picked up by the ServiceEntry that points to the second cluster.

Right now this is working, but only because when trying to reach "foo-1.namespace2", the resolv.conf configuration in the pod completes the name with ".global". A way to go straight to the resource, instead of "failing" into it, is desirable.
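
To illustrate, the pod's /etc/resolv.conf looks roughly like this (values are representative; "global" is appended to the search list):

search namespace1.svc.cluster.local svc.cluster.local cluster.local global
nameserver 10.96.0.10
options ndots:5

Because "foo-1.namespace2" has fewer dots than ndots, the resolver walks the search list and only succeeds on the last entry, "foo-1.namespace2.global".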

The idea is to have this workflow:

  1. bar-1.namespace1 tries to reach foo-1.namespace2.
  2. A ServiceEntry matches the "foo-1.namespace2" host, so this name exists in cluster 1.
  3. A VirtualService matches "foo-1.namespace2" and routes to a different destination, "foo-1.namespace2.global".
  4. A ServiceEntry matches "foo-1.namespace2.global", which is actually responsible for sending the traffic to cluster 2.

I can't make this logic work as expected, as points "2" and "3" seem to make no difference whether they exist or not.

At this point I am able to communicate between clusters without using ".global" from within the pod, but only because ".global" is a search domain in the pod's /etc/resolv.conf. So point "4" is working as expected; it is just how the traffic gets there that is not good.
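
A way to check whether points "2" and "3" are being applied at all is to inspect what the sidecar actually received (the pod name here is a placeholder):

istioctl proxy-config routes bar-1 -n namespace1 --name 8080 -o json
istioctl proxy-config cluster bar-1 -n namespace1 --fqdn foo-1.namespace2.global

The first command should show the VirtualService rewrite on the 8080 route; the second should show the cluster created from the ".global" ServiceEntry.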

The current configuration is this:

ServiceEntry meant to "pick up" the call so I don't get a "host not found" error:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: alias-foo-1.namespace2
  namespace: namespace1
spec:
  hosts:
  - foo-1.namespace2
  location: MESH_INTERNAL
  ports:
  - name: cockroachdb-grpc
    number: 26257
    protocol: TCP
  - name: cockroachdb-http
    number: 8080
    protocol: HTTP
  resolution: DNS

VirtualService meant to rewrite the destination to the one with ".global" in its name:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: redirect-foo-1.namespace2
  namespace: namespace1
spec:
  hosts:
  - foo-1.namespace2
  http:
  - route:
    - destination:
        host: foo-1.namespace2.global
        port:
          number: 8080 # explicit port needed: the .global entry exposes two ports
    rewrite:
      authority: foo-1.namespace2.global
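
One caveat worth flagging: the http rule above only covers the HTTP port (8080). If the TCP port 26257 is meant to be redirected as well, a tcp section would presumably be needed in the same VirtualService; a sketch under that assumption:

  tcp:
  - match:
    - port: 26257
    route:
    - destination:
        host: foo-1.namespace2.global
        port:
          number: 26257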

ServiceEntry that actually sends the traffic to the second cluster. This one is working:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: foo-1.namespace2-global
  namespace: namespace1
spec:
  hosts:
  - foo-1.namespace2.global
  location: MESH_INTERNAL
  ports:
  - name: cockroachdb-http
    number: 8080
    protocol: HTTP
  - name: tcp-cockroachdb
    number: 26257
    protocol: TCP
  resolution: DNS
  addresses:
  - 240.0.4.10
  endpoints:
  - address: 10.0.0.1
    ports:
      cockroachdb-http: 15443
      tcp-cockroachdb: 15443
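
For completeness, the path can be exercised from the originating pod like this (assuming nslookup and curl are available in the image):

kubectl exec -n namespace1 bar-1 -- nslookup foo-1.namespace2.global
kubectl exec -n namespace1 bar-1 -- curl -sv http://foo-1.namespace2:8080/

The nslookup confirms that istiocoredns answers for the .global zone, and the verbose curl shows which name the request actually goes out with.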
  • As you are saying, .global should not be used from the pod. It's kube-dns that redirects these requests to another DNS pod, within the istio-system namespace. Do you have this DNS pod up and running? Can you share your config? – suren Nov 02 '20 at 13:05
  • istiocoredns is up and running, and I am able to resolve any .global address, as the kube-dns config has the proper zone with a forward to the istiocoredns service hosting it. In fact, this is a working configuration: the pod's resolv.conf appends .global to the name and istiocoredns then finds it and directs traffic to the proper endpoint. I would like something cleaner, as relying on the search domains from resolv.conf feels like "failing towards" the ServiceEntry instead of explicitly picking it. – carrotcakeslayer Nov 02 '20 at 14:13
