
I'm configuring GKE Multi-Cluster Services according to document1 and document2, inspired by the multi-cluster-serice-communication-in-gke tutorial.

Somehow I'm failing at the "Registering a Service for export" step on the second cluster.

I'm using the following YAML file to export ngnix-service on the first (fleet) cluster:

# export.yaml
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  namespace: multy-service-poc
  name: ngnix-service
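
For context, the Service being exported looks roughly like this (a minimal sketch: only the name and namespace come from my setup; the selector and port are illustrative). Per the MCS docs, the ServiceExport must have the same name and namespace as the Service:

# service.yaml (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: ngnix-service
  namespace: multy-service-poc
spec:
  selector:
    app: ngnix        # assumed pod label
  ports:
    - port: 80        # assumed service port
      targetPort: 80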

It exports fine: the service is accessible from the other cluster, and the ServiceExport's status on the first cluster is True:

k describe serviceexport ngnix-service                                                                             
Name:         ngnix-service
Namespace:    multy-service-poc
Labels:       <none>
Annotations:  <none>
API Version:  net.gke.io/v1
Kind:         ServiceExport
Metadata:
  Creation Timestamp:  2021-12-11T11:22:37Z
  Finalizers:
    serviceexport.net.gke.io
  Generation:  2
  Managed Fields:
    API Version:  net.gke.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2021-12-11T11:22:37Z
    API Version:  net.gke.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"serviceexport.net.gke.io":
      f:spec:
      f:status:
        .:
        f:conditions:
    Manager:         Google-GKE-MCS
    Operation:       Update
    Time:            2021-12-11T11:22:39Z
  Resource Version:  58873
  UID:               a42dc51c-93ff-4526-9c04-9702ed7ba95d
Spec:
Status:
  Conditions:
    Last Transition Time:  2021-12-11T11:22:38Z
    Status:                True
    Type:                  Initialized
    Last Transition Time:  2021-12-11T11:22:40Z
    Status:                True
    Type:                  Exported
Events:                    <none>
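
This is how I check reachability from the other cluster (a sketch; the pod name and curl image are arbitrary). MCS publishes exported services under the clusterset.local domain:

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl \
  -- curl http://ngnix-service.multy-service-poc.svc.clusterset.local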

When I export the same service on the second cluster using the same YAML file, it fails and the Exported status is False:

kubectl  describe serviceexport ngnix-service 
Name:         ngnix-service
Namespace:    multy-service-poc
Labels:       <none>
Annotations:  <none>
API Version:  net.gke.io/v1
Kind:         ServiceExport
Metadata:
  Creation Timestamp:  2021-12-13T07:29:36Z
  Finalizers:
    serviceexport.net.gke.io
  Generation:  2
  Managed Fields:
    API Version:  net.gke.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2021-12-13T07:29:36Z
    API Version:  net.gke.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"serviceexport.net.gke.io":
      f:spec:
      f:status:
        .:
        f:conditions:
    Manager:         Google-GKE-MCS
    Operation:       Update
    Time:            2021-12-13T07:31:10Z
  Resource Version:  1191220
  UID:               45bb42a8-effc-4a9d-95e8-22ff736a54af
Spec:
Status:
  Conditions:
    Last Transition Time:  2021-12-13T07:30:03Z
    Status:                True
    Type:                  Initialized
    Last Transition Time:  2021-12-13T07:31:10Z
    Message:               Conflicting "Project". Using from oldest service export in cluster "projects/mssp-eugene-mcs1/locations/us-east1/clusters/mssp-eugene-mcs-k8s-cluster1"
    Reason:                Conflict
    Status:                False
    Type:                  Exported
Events:                    <none>
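
For what it's worth, the ServiceImport that MCS generates on the consuming side can be inspected as well (output omitted here):

kubectl get serviceimports -n multy-service-poc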

I also see both clusters in the hub memberships:

gcloud container hub memberships list --project eugene-mcs1
NAME                          EXTERNAL_ID
eugene-mcs-k8s-cluster2  e943ed80-6a49-4781-897c-57ae3266fb37
eugene-mcs-k8s-cluster1  074d59f2-fce2-491e-a99e-6d2b8587803c

The expected behavior is that ngnix-service is exported from both clusters and exposes the ngnix pods from both clusters accordingly.

My configuration is two GKE clusters in different service projects, using a Shared VPC from a third host project.
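
For reference, the fleet and MCS were enabled roughly as in the how-to guide (a sketch; the membership name and cluster location are placeholders):

gcloud services enable multiclusterservicediscovery.googleapis.com \
    --project eugene-mcs1
gcloud container hub multi-cluster-services enable \
    --project eugene-mcs1
gcloud container hub memberships register MEMBERSHIP_NAME \
    --gke-cluster LOCATION/CLUSTER_NAME \
    --enable-workload-identity \
    --project eugene-mcs1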

Thanks

  • When you try to export the same service in the second cluster and fail, what is the message the system shows? Can you please share the details of the error message? – Ismael Clemente Aguirre Dec 13 '21 at 19:32
  • No error; I just saw that no network endpoint groups were added to Traffic Director, so I checked the ServiceExport description on the second cluster and found the conflict message and status. – Eugene Gorelik Dec 14 '21 at 12:58
  • Did you configure MCS following the official Google documentation? Also, be sure to create the same namespace in both clusters. https://cloud.google.com/kubernetes-engine/docs/how-to/multi-cluster-services#enabling – Ismael Clemente Aguirre Dec 22 '21 at 21:13
  • The issue is under investigation by the Google Cloud support team. No ETA for a solution. – Eugene Gorelik Jan 05 '22 at 08:10

1 Answer


It turns out GKE Multi-Cluster Services only aggregates a service's backends from clusters within a single project, even though the documentation says it works for clusters in different projects.
This is the official response to my ticket from Google Cloud Support:

I have received the following feedback from the Google product engineering team:
... As there are one host project, one fleet service project 1, and one service project 2, any given service's backends must be entirely contained within a single project; in other words, backends will only aggregate from clusters within a single project, though they can be from multiple clusters in that service project 1 (which still allows for fault tolerance, as those clusters can be in different regions, just not across a project boundary). Through the Shared VPC MCS service (i.e. yourservice.yournamespace.svc.clusterset.local), service project 2 can reach the backends in service project 1 using MCS as long as the namespace (i.e. yournamespace) is present.

Therefore, your target service and all backing pods (and therefore all ServiceExports) should be in clusters in VPC service project 1. For fault tolerance, there should be several clusters in VPC service project 1 exporting this Service, from different regions. That service may be consumed in clusters in VPC service project 2, backed by the VPC service project 1 backends ...

As discussed above, high availability of services with MCS using clusters from different Shared VPC service projects (as opposed to clusters in different regions) is unfortunately not currently possible. This is because the MCS workload cannot yet resolve a common exported service across different project IDs. I hope my explanation is clear; if there is anything you don't understand, please let me know.

If you want, I can create a Public Issue Tracker entry to request this feature so you can follow any updates around this issue. However, I am unable to provide an estimate of when this would be implemented, if at all.
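
In practice, the supported pattern from the support response boils down to this (a sketch reusing the names from my question; the kubectl contexts are placeholders):

# Export only from clusters in the fleet service project (project 1),
# ideally several clusters in different regions for fault tolerance:
kubectl --context project1-cluster-a apply -f export.yaml
kubectl --context project1-cluster-b apply -f export.yaml

# In service project 2, do not export. Just make sure the namespace exists
# and consume the service through the clusterset DNS name:
kubectl --context project2-cluster create namespace multy-service-poc
# http://ngnix-service.multy-service-poc.svc.clusterset.local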