Does Serverless Framework support the ability to deploy the same API to multiple cloud providers (AWS, Azure and IBM) and route requests to each provider based on traditional load balancer methods (i.e. round robin or latency)?

Does Serverless Framework support this function directly?

Does Serverless integrate with global load balancers (e.g. dyn or neustar)?

Zanon

1 Answer


Does Serverless Framework support the ability to deploy the same API to multiple cloud providers (AWS, Azure and IBM)

Just use 3 different serverless.yml files and deploy each function 3 times.
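For example, you could keep one config file per provider and point the same function code at each. A minimal sketch of the AWS variant, assuming a hypothetical `my-api` service and `handler.hello` function (the Azure and IBM/OpenWhisk variants require their respective provider plugins, such as `serverless-azure-functions` and `serverless-openwhisk`, and differ mainly in the `provider` block and event syntax):

```yaml
# serverless.aws.yml — hypothetical service and handler names
service: my-api

provider:
  name: aws            # swap for azure / openwhisk in the other files
  runtime: nodejs18.x

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: hello
          method: get
```

You would then run the deploy command once per config file, and each provider hands back its own endpoint URL.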

and route requests to each provider based on traditional load balancer methods (i.e. round robin or latency)?

No, there is no such support for multi-cloud load balancing.

The Serverless concept is based on trust: you trust that your cloud provider can handle your traffic with proper scalability and availability. There is no multi-cloud model; a single cloud provider must be able to satisfy your needs, and to achieve this it must implement a proper load-balancing scheme internally.

If you don't trust your cloud provider, you are not thinking in a serverless way. Serverless means that you should not worry about the infrastructure that supports your app.

However, you can implement a sort of multi-cloud load balancing yourself.

When you write a serverless.yml file, you must say which provider (AWS, Azure, IBM) will create those resources. Multi-cloud means that you need one serverless.yml file per cloud provider, but the source code (the functions) can be the same. When you deploy the same function to 3 different providers, you receive 3 different endpoints to access it.

Now, which machine will perform the load balancing? If you don't trust that a single cloud provides enough availability, how will you decide who hosts the load balancer?

The only solution that I see is to implement this load balancing in your frontend code. Your app would know the 3 different endpoints and randomize the requests. If a request returns an error, the endpoint would be marked as unhealthy. You could also measure the latency of each endpoint and select a preferred provider. All of this in the client code.

However, don't follow this path. Choose just one provider for production code. The SLA (service level agreement) usually guarantees high availability. If that's not enough, you should still stick with one provider and keep some scripts in hand to easily migrate to another cloud in case of a mass outage of your preferred provider.

Zanon