We've got an API micro-services infrastructure hosted on Azure VMs. Each VM will host several APIs which are separate sites running on Kestrel. All external traffic comes in through an RP (running on IIS).

We have some APIs that are designed to accept external requests and some that are internal-only.

The internal APIs are hosted on scalesets, with each scaleset VM being a replica that hosts all of the internal APIs. There is an internal load balancer (ILB)/VIP in front of the scaleset. The root issue is that we have internal APIs that call other internal APIs hosted on the same scaleset. Ideally these calls would go to the VIP (using internal DNS) and the VIP would route to one of the machines in the scaleset. But it looks like Azure doesn't allow this...per the documentation:

You cannot access the ILB VIP from the same Virtual Machines that are being load-balanced

So how do people set this up with micro-services? I can see three ways, none of which are ideal:

  1. Separate out the APIs to different scalesets. Not ideal as the services are very lightweight and I don't want to triple my Azure VM expenses.
  2. Convert the internal LB to an external LB (add a public IP address). Then put that LB in its own network security group/subnet to only allow calls from our Azure IP range. I would expect more latency here, and exposing the endpoints externally in any way creates more attack surface area as well as more configuration complexity.
  3. Set up the VM to loop back if it needs a call to the ILB...meaning any requests originating from a VM will be handled by the same VM. This defeats the purpose of micro-services behind a VIP. An internal micro-service may be down on the same machine for some reason and available on another...that's the reason we set up health probes on the ILB for each service separately. If it just goes back to the same machine, you lose resiliency.
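For illustration, option 3 amounts to the caller short-circuiting co-hosted services to the local machine instead of the ILB DNS name. A minimal sketch (service names, ports, and the ILB hostname here are all hypothetical):

```java
import java.util.Map;

// Sketch of option 3: route calls to co-hosted internal APIs to loopback
// instead of the ILB VIP. All names and ports below are made up.
public class LocalFirstRouter {
    // Internal APIs that run on every scaleset VM, keyed by service name.
    private static final Map<String, Integer> LOCAL_PORTS =
        Map.of("orders-api", 5001, "billing-api", 5002);

    // Returns a base URL: loopback if the service is co-hosted on this VM,
    // otherwise the ILB's internal DNS name.
    public static String baseUrl(String service) {
        Integer port = LOCAL_PORTS.get(service);
        if (port != null) {
            return "http://localhost:" + port;
        }
        return "http://internal-lb.example.internal/" + service;
    }
}
```

Note that this sketch shows exactly the resiliency problem described above: the loopback branch is taken unconditionally, with no awareness of whether the local instance is actually healthy.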

Any pointers on how others have approached this would be appreciated.

Thanks!

swannee
  • Take a look at Azure Service Fabric. We also use it for VM endpoint scaling and it works like a charm. The downside is that the configuration currently works only via ARM. – Benjamin Abt Aug 02 '16 at 13:25

2 Answers


I think your problem is related to service discovery.

Load balancers are obviously not designed for that. You should consider dedicated software such as Eureka (which can work outside of AWS). With service discovery, your microservices call each other directly once they have been discovered.

Also take a look at client-side load balancing tools such as Ribbon.
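The idea behind Ribbon-style client-side load balancing is that the caller itself holds the list of discovered instances and rotates through them, so no VIP is involved at all. A minimal round-robin sketch (instance addresses are hypothetical; in a real setup the list would come from a registry such as Eureka and be refreshed as instance health changes):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of client-side load balancing: each caller rotates through the
// known healthy instances of a service instead of calling through a VIP.
public class RoundRobinClient {
    private final List<String> instances;
    private final AtomicInteger cursor = new AtomicInteger();

    public RoundRobinClient(List<String> instances) {
        this.instances = List.copyOf(instances);
    }

    // Pick the next instance in round-robin order; thread-safe.
    public String next() {
        int i = Math.floorMod(cursor.getAndIncrement(), instances.size());
        return instances.get(i);
    }
}
```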

cdelmas

@cdelmas's answer on service discovery is great. Please allow me to add my thoughts:

For services such as yours, you can also look into Netflix's Zuul proxy for server- and client-side load balancing. You could even use Hystrix on top of Eureka for latency and fault tolerance. Netflix is way ahead of the game on this.

You may also look into Consul if you want to use Go. It has scriptable configuration for managing your services, allows advanced security configuration, and supports non-REST endpoints. Eureka can do these things as well, but requires you to add a configuration server (Netflix Archaius, Apache ZooKeeper, Spring Cloud Config), code the security yourself, and handle access through Zuul/Sidecar.
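As a sketch, a Consul service definition for one of the internal APIs might look like this (the service name, port, and health endpoint are hypothetical; the `check` block gives Consul the same per-service health probing the ILB was providing):

```json
{
  "service": {
    "name": "orders-api",
    "port": 5001,
    "check": {
      "http": "http://localhost:5001/health",
      "interval": "10s"
    }
  }
}
```

With this in place, callers resolve `orders-api` through Consul and only get instances whose health check is currently passing, regardless of which VM they are on.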

SISLAM