5

We have many internet-facing services. What are the considerations when deciding whether to use an ALB per service or a single ALB for all of them, using listener rules pointing to target groups?

Each service has its own cluster/target group, with different functionality and a different URL.

Can a spike in one service impact the other services? Is a single ALB a single point of failure? What about cost? Observability, monitoring, and logs? Ease of management?

Avihay Tsayeg
  • 444
  • 4
  • 10

3 Answers

2

Personally, I would normally use a single ALB with different listener rules for different services.

For example, I have service1.domain.com and service2.domain.com. I would have two host-based listener rules on the same ALB which route to the different services.
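The routing those rules perform can be sketched in plain Python. This is only an illustration of how host-based rule evaluation behaves, not ALB's actual implementation; the target group names are hypothetical, matching the service1/service2 example:

```python
# Sketch of ALB host-based listener rule evaluation. Rule and target group
# names are assumptions following the service1/service2 example above.
RULES = [
    {"host": "service1.domain.com", "target_group": "service1-tg"},
    {"host": "service2.domain.com", "target_group": "service2-tg"},
]
DEFAULT_ACTION = "fixed-response-404"  # the listener's default action

def route(host_header: str) -> str:
    """Return the target group (or default action) for a request's Host header."""
    for rule in RULES:  # ALB evaluates rules in priority order
        if rule["host"] == host_header.lower():
            return rule["target_group"]
    return DEFAULT_ACTION  # no rule matched: fall through to the default action

print(route("service1.domain.com"))   # service1-tg
print(route("unknown.domain.com"))    # fixed-response-404
```

The key point is that each rule forwards to its own target group, so the two services stay independent behind one load balancer.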

In my experience ALB is highly available and scales very nicely without any issues. I've never had a service become unreachable due to scaling issues. ALBs scale based on "Load Balancer Capacity Units" (LCUs). As your load balancer requires more capacity, AWS automatically allocates more LCUs, which allows it to handle more traffic.
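As I understand AWS's pricing documentation, an LCU covers four traffic dimensions and you are billed on whichever dimension is highest in a given hour. A rough estimator, assuming the published per-LCU allowances for EC2 targets (verify against current AWS pricing, as these figures can change):

```python
def lcus_used(new_conns_per_sec: float,
              active_conns_per_min: float,
              gb_processed_per_hour: float,
              rule_evals_per_sec: float) -> float:
    """Rough LCU estimate: billing follows the highest of the four dimensions.
    Per-LCU allowances below are assumed from AWS's published figures for
    EC2 targets: 25 new conns/s, 3,000 active conns/min, 1 GB/hour processed,
    1,000 rule evaluations/s."""
    return max(
        new_conns_per_sec / 25,
        active_conns_per_min / 3000,
        gb_processed_per_hour / 1.0,
        rule_evals_per_sec / 1000,
    )

# Here new connections dominate: 50/25 = 2 LCUs
print(lcus_used(50, 1500, 0.5, 200))  # 2.0
```

This is also why consolidating services onto one ALB rarely hurts scaling: the LCU allocation grows with aggregate load rather than being a fixed per-ALB ceiling.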

Source: Own experience working on an international system consisting of monoliths and microservices which have a large degree of scaling between timezones.

Alex Bailey
  • 1,260
  • 12
  • 27
1

A spike in service A doesn't impact service B, but identifying which service is having a bad time can be a bit of a pain. From a monitoring perspective it's harder, because it's not easy to quickly identify which service/target is suffering. For management, as soon as different teams need to create and manage their own targets, it can create conflicts.

I wouldn't encourage you to use that monolithic architecture.

Bruno Criado
  • 120
  • 7
0

From a cost perspective you can use one load balancer with multiple forwarding rules, but using a single central load balancer for an entire application ecosystem essentially reproduces the standard monolithic architecture while enormously increasing the number of instances served by one load balancer. In addition to being a single point of failure for the entire system should it go down, this single load balancer can very quickly become a major bottleneck, since all traffic to every microservice has to pass through it. Using a separate load balancer per microservice type adds some overhead, but it confines the single point of failure to one microservice: in this model, incoming traffic for each type of microservice is sent to a different load balancer.
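The cost trade-off can be put in rough numbers. A back-of-envelope comparison, assuming illustrative us-east-1 prices (the hourly ALB charge and per-LCU-hour charge here are assumptions; check current AWS pricing):

```python
# Back-of-envelope monthly cost: one shared ALB vs one ALB per service.
# Prices are assumed example figures, not authoritative.
ALB_HOURLY = 0.0225    # assumed fixed $/hour per ALB
LCU_HOURLY = 0.008     # assumed $/LCU-hour
HOURS_PER_MONTH = 730

def monthly_cost(num_albs: int, total_lcus: float) -> float:
    """LCU (traffic-based) charges are roughly the same either way;
    the fixed hourly charge is what multiplies with per-service ALBs."""
    fixed = num_albs * ALB_HOURLY * HOURS_PER_MONTH
    usage = total_lcus * LCU_HOURLY * HOURS_PER_MONTH
    return fixed + usage

shared = monthly_cost(1, 5)        # one ALB carrying 5 LCUs of total traffic
per_service = monthly_cost(10, 5)  # ten ALBs splitting the same traffic
print(f"shared: ${shared:.2f}/mo, per-service: ${per_service:.2f}/mo")
```

Under these assumptions the traffic charge is identical in both layouts, so the per-service design mainly adds the fixed hourly charge once per extra ALB.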

Asri Badlah
  • 1,949
  • 1
  • 9
  • 20