
My application (pod) is presently running behind a GLB in a GKE cluster in Europe (say, the Netherlands). I have a requirement to scale the application to serve customers in the US and Asia (say, SFO and Japan).

I have noticed that kubemci is in beta.

Please help to clarify whether GKE needs to be duplicated in the US and Asia, or whether the Europe cluster is sufficient.

Please share some best practices/recommendations for this scenario.

Manikandan

2 Answers


The general approach is to create GKE clusters with identical configuration in each region. Whether you need a regional replica of the application basically depends on the number of users in the region and on your response time limits.
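
For illustration, creating identically configured regional clusters could look roughly like the following sketch (the cluster names, regions, and node count are placeholders, not values from your setup):

    # Hypothetical cluster names and regions; adjust to your project.
    gcloud container clusters create app-cluster-eu \
        --region europe-west4 --num-nodes 1

    gcloud container clusters create app-cluster-us \
        --region us-west1 --num-nodes 1

    gcloud container clusters create app-cluster-asia \
        --region asia-northeast1 --num-nodes 1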

A possible implementation you could follow is described here: How to deploy geographically distributed services on Kubernetes Engine with kubemci

Multiple clusters can then be load balanced with the multi-cluster Ingress tool (kubemci).
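
A minimal kubemci sketch, assuming an ingress.yaml whose Ingress is annotated with kubernetes.io/ingress.global-static-ip-name to point at a pre-reserved global static IP, and a kubeconfig file containing contexts for all regional clusters (the resource names and project ID below are placeholders):

    # Reserve a global static IP referenced by the Ingress in ingress.yaml.
    gcloud compute addresses create my-app-ip --global

    # Create the multi-cluster Ingress across all clusters in the kubeconfig.
    kubemci create my-app-mci \
        --ingress=ingress.yaml \
        --gcp-project=my-project \
        --kubeconfig=clusters-kubeconfig.yaml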

As you've noticed, kubemci is in beta at this time and has limited support, so it is unlikely to be suitable for production workloads; it is better suited for trying things out.

mebius99
  • Do you have any feedback on using GCP Traffic Director for multi-region? – Manikandan Nov 12 '19 at 16:58
  • GKE multi-cluster Services (MCS) is now available. Reference links: [1] https://cloud.google.com/kubernetes-engine/docs/how-to/multi-cluster-services [2] https://cloud.google.com/blog/products/containers-kubernetes/introducing-gke-multi-cluster-services – Manikandan Dec 15 '21 at 17:35

I might be a few weeks too late, but this might still be relevant for someone.

Caution: The kubemci tool is a temporary solution intended to help users begin using multi-cluster Ingress. This tool will be replaced by an implementation using kubectl that delivers a more Kubernetes-native experience. Once the kubectl implementation is available, you will need to manually migrate any apps that use kubemci.

I don't know your architecture, but if applicable, use container-native load balancing with a global load balancer.

A simple container-native load balancer setup, where you can do everything with Kubernetes resources, doesn't give you global load balancing. For that you need standalone network endpoint groups (NEGs): create the load balancer manually (or with whatever tooling you use), create a backend service that includes the NEGs for the same service in the different clusters, and add that backend service to the load balancer. You get the benefits of the Premium network tier (lower latency for clients globally) and traffic spillover if the service is overloaded or down in one of the regions.
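
As a rough sketch only (the service, NEG, zone, and resource names and the port below are hypothetical, and the exact flags depend on your setup), the manual wiring could look roughly like this:

    # In each cluster, expose the Service through standalone NEGs
    # (the annotation makes GKE create zonal NEGs for the pods;
    # GKE generates NEG names unless you set "name" in the annotation).
    kubectl annotate service my-app-service \
        cloud.google.com/neg='{"exposed_ports": {"80":{}}}'

    # Create a health check and a global backend service.
    gcloud compute health-checks create http my-app-hc --port 80
    gcloud compute backend-services create my-app-backend \
        --global --protocol HTTP --health-checks my-app-hc

    # Add the NEGs from each cluster/region as backends.
    gcloud compute backend-services add-backend my-app-backend \
        --global \
        --network-endpoint-group my-app-neg-eu \
        --network-endpoint-group-zone europe-west4-a \
        --balancing-mode RATE --max-rate-per-endpoint 100
    gcloud compute backend-services add-backend my-app-backend \
        --global \
        --network-endpoint-group my-app-neg-us \
        --network-endpoint-group-zone us-west1-a \
        --balancing-mode RATE --max-rate-per-endpoint 100

    # Front the backend service with a global HTTP(S) load balancer.
    gcloud compute url-maps create my-app-lb --default-service my-app-backend
    gcloud compute target-http-proxies create my-app-proxy --url-map my-app-lb
    gcloud compute forwarding-rules create my-app-fr \
        --global --target-http-proxy my-app-proxy --ports 80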

Tanel Mae