I'm configuring GKE Multi-Cluster Services (MCS) according to document1 and document2, and was also inspired by the multi-cluster-service-communication-in-gke tutorial.
Somehow I'm failing at the "Registering a Service for export" step on the second cluster.
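For context, MCS was enabled on the fleet host project roughly along these lines, following the setup guide (the project ID below is a placeholder):
# enable the MCS API and the fleet feature on the fleet host project
gcloud services enable multiclusterservicediscovery.googleapis.com --project FLEET_PROJECT_ID
gcloud container hub multi-cluster-services enable --project FLEET_PROJECT_ID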
I'm using the following YAML file to export ngnix-service on the first (fleet) cluster:
# export.yaml
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  namespace: multy-service-poc
  name: ngnix-service
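For reference, the ServiceExport pairs with a Service of the same name in the same namespace on that cluster; mine looks roughly like this (selector and ports are illustrative):
# service.yaml (selector/ports illustrative)
apiVersion: v1
kind: Service
metadata:
  name: ngnix-service
  namespace: multy-service-poc
spec:
  selector:
    app: ngnix
  ports:
  - port: 80
    targetPort: 80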
It is exported successfully: the service is accessible from the other cluster, and the ServiceExport's Exported condition on the first cluster is True:
k describe serviceexport ngnix-service
Name:         ngnix-service
Namespace:    multy-service-poc
Labels:       <none>
Annotations:  <none>
API Version:  net.gke.io/v1
Kind:         ServiceExport
Metadata:
  Creation Timestamp:  2021-12-11T11:22:37Z
  Finalizers:
    serviceexport.net.gke.io
  Generation:  2
  Managed Fields:
    API Version:  net.gke.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2021-12-11T11:22:37Z
    API Version:  net.gke.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"serviceexport.net.gke.io":
      f:spec:
      f:status:
        .:
        f:conditions:
    Manager:      Google-GKE-MCS
    Operation:    Update
    Time:         2021-12-11T11:22:39Z
  Resource Version:  58873
  UID:               a42dc51c-93ff-4526-9c04-9702ed7ba95d
Spec:
Status:
  Conditions:
    Last Transition Time:  2021-12-11T11:22:38Z
    Status:                True
    Type:                  Initialized
    Last Transition Time:  2021-12-11T11:22:40Z
    Status:                True
    Type:                  Exported
Events:  <none>
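This is roughly how I verify the export from the other (importing) cluster, using the clusterset.local name that MCS creates (the test pod is illustrative):
# on the importing cluster
kubectl get serviceimport -n multy-service-poc
kubectl run curl-test -it --rm --restart=Never -n multy-service-poc \
  --image=curlimages/curl -- curl http://ngnix-service.multy-service-poc.svc.clusterset.local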
When I export the same service on the second cluster using the same YAML file, it fails and the Exported condition is False:
kubectl describe serviceexport ngnix-service
Name:         ngnix-service
Namespace:    multy-service-poc
Labels:       <none>
Annotations:  <none>
API Version:  net.gke.io/v1
Kind:         ServiceExport
Metadata:
  Creation Timestamp:  2021-12-13T07:29:36Z
  Finalizers:
    serviceexport.net.gke.io
  Generation:  2
  Managed Fields:
    API Version:  net.gke.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2021-12-13T07:29:36Z
    API Version:  net.gke.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"serviceexport.net.gke.io":
      f:spec:
      f:status:
        .:
        f:conditions:
    Manager:      Google-GKE-MCS
    Operation:    Update
    Time:         2021-12-13T07:31:10Z
  Resource Version:  1191220
  UID:               45bb42a8-effc-4a9d-95e8-22ff736a54af
Spec:
Status:
  Conditions:
    Last Transition Time:  2021-12-13T07:30:03Z
    Status:                True
    Type:                  Initialized
    Last Transition Time:  2021-12-13T07:31:10Z
    Message:               Conflicting "Project". Using from oldest service export in cluster "projects/mssp-eugene-mcs1/locations/us-east1/clusters/mssp-eugene-mcs-k8s-cluster1"
    Reason:                Conflict
    Status:                False
    Type:                  Exported
Events:  <none>
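For completeness, this is roughly how I inspect what was exported and imported on each cluster (output omitted, nothing else stood out):
# on each cluster, in the shared namespace
kubectl get serviceexport,serviceimport -n multy-service-poc -o yaml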
Both clusters appear in the hub (fleet) membership as well:
gcloud container hub memberships list --project eugene-mcs1
NAME                     EXTERNAL_ID
eugene-mcs-k8s-cluster2  e943ed80-6a49-4781-897c-57ae3266fb37
eugene-mcs-k8s-cluster1  074d59f2-fce2-491e-a99e-6d2b8587803c
The expected behavior is that ngnix-service is exported from both clusters and exposes the ngnix pods from both clusters accordingly.
My configuration is two GKE clusters in different projects, using a Shared VPC from a third host project.
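For reference, the clusters were registered into the fleet host project (eugene-mcs1) roughly like this; the command for the second cluster is analogous, pointing at its own project (first cluster's URI taken from the conflict message above):
gcloud container hub memberships register eugene-mcs-k8s-cluster1 \
  --gke-uri=https://container.googleapis.com/v1/projects/mssp-eugene-mcs1/locations/us-east1/clusters/mssp-eugene-mcs-k8s-cluster1 \
  --enable-workload-identity \
  --project eugene-mcs1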
Thanks