
I'm interested in migrating from a manually managed AWS EC2 setup with multiple webapps to AWS ELB with auto scaling. However, I'm not sure whether a single ELB will be enough to handle my architecture.

My current architecture is as follows:

  • my infrastructure consists of about 50 ec2 micro instances, which serve 50 different webapps, one instance per app, and that is usually enough,
  • the traffic is routed through an ec2 instance with nginx installed, which directs connections according to hostnames (say my domain is abc.com: server app1 (1.1.1.1) is responsible for handling app1.abc.com and server app2 is responsible for app2.abc.com; see the config sketch after this list),
  • the traffic sometimes spikes, but not periodically or predictably,
  • if utilization is high enough, I manually create another ec2 instance serving appX and add its details to the nginx config,
  • after the high traffic period I stop/terminate unnecessary instances.
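
A minimal sketch of what that nginx routing looks like, assuming one server block per hostname with a plain proxy_pass (only app1's 1.1.1.1 appears above; the app2 address is an illustrative placeholder):

```nginx
# Hypothetical excerpt of the existing nginx config: one server block per
# hostname, each proxying to the instance(s) currently serving that app.
server {
    listen 80;
    server_name app1.abc.com;

    location / {
        proxy_pass http://1.1.1.1;   # app1 instance; edited by hand on scale out/in
    }
}

server {
    listen 80;
    server_name app2.abc.com;

    location / {
        proxy_pass http://2.2.2.2;   # app2 instance (placeholder address)
    }
}
```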

The main problem with this architecture is that I need to react to traffic changes manually. Because of that, I was thinking of taking advantage of AWS auto scaling, since I'm using AWS anyway, but after a scale-out I would still need to modify nginx by hand, and again after a scale-in. The obvious solution is ELB, which can react to auto scaling; however, I'm not sure how it would handle multiple auto scaling groups (one per hostname) and multiple hostnames linked to specific servers.

If a single ELB with auto scaling is not capable of that, and having 50 different ELBs for mostly single machines is out of the question, are there any alternatives to achieve this level of automation in such an architecture?

Michal
  • What about leveraging [automated autoscaling notifications](http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/ASGettingNotifications.html) to drive an automated rebuild/reload of the nginx configuration? – Michael - sqlbot Sep 02 '15 at 16:25
  • @Michael-sqlbot That's one of the options I'm considering, but making it reliable and looking after it would be my job. Of course it would be less work than the current solution, but I can't understand why what I'm doing doesn't fit nicely into the AWS model of services. Is such an architecture something uncommon or weird? – Michal Sep 03 '15 at 06:43
  • @Michael-sqlbot Since I didn't get any other suggestions, I'm going to follow yours (a rough sketch of that approach is below). If you make it an answer, I will accept it. – Michal Sep 06 '15 at 15:54
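
A rough sketch of the rebuild/reload step from that suggestion, assuming an Auto Scaling notification (e.g. via SNS) triggers a small script per group; the group name, config path, and port are hypothetical:

```python
# Sketch only: regenerate an nginx upstream file from the current members of one
# Auto Scaling group, then reload nginx. Names and paths are hypothetical.
import subprocess

import boto3

ASG_NAME = "app1-asg"                                # hypothetical Auto Scaling group
CONF_PATH = "/etc/nginx/conf.d/app1-upstream.conf"   # hypothetical include file

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

# Find the instances currently in service in the group.
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[ASG_NAME]
)["AutoScalingGroups"][0]
instance_ids = [
    i["InstanceId"] for i in group["Instances"] if i["LifecycleState"] == "InService"
]

# Resolve their private IP addresses.
ips = []
if instance_ids:
    for reservation in ec2.describe_instances(InstanceIds=instance_ids)["Reservations"]:
        for instance in reservation["Instances"]:
            ips.append(instance["PrivateIpAddress"])

# Rewrite the upstream block and reload nginx.
lines = ["upstream app1_backend {"]
lines += [f"    server {ip}:80;" for ip in ips]
lines.append("}")

with open(CONF_PATH, "w") as f:
    f.write("\n".join(lines) + "\n")

subprocess.run(["nginx", "-s", "reload"], check=True)
```

The server block for app1.abc.com would then proxy_pass to app1_backend instead of a hard-coded instance IP.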

1 Answer


Have you considered having one external ELB hand off traffic to a number of nginx reverse proxies, with internal ELBs for the nginx proxies to route to the app servers?

Unless you release a new app, the configs on the nginx servers will remain consistent, and you can scale out both the nginx group and any/all of the app server groups.
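
A minimal sketch of the nginx side under this proposal, assuming each app is fronted by its own internal ELB and nginx proxies to that ELB's DNS name (the name below is a made-up placeholder):

```nginx
# Sketch only: the nginx tier stays static because it targets the internal ELB's
# DNS name rather than individual instances; the ELB DNS name is a placeholder.
server {
    listen 80;
    server_name app1.abc.com;

    location / {
        proxy_pass http://internal-app1-123456789.us-east-1.elb.amazonaws.com;
    }
}
```

One caveat to check: nginx resolves a proxy_pass hostname at startup/reload, so picking up ELB IP changes may require a resolver directive and a variable in proxy_pass.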

Sirch
  • Your solution fits my architecture well, but if I understand it correctly I would need 1 (main LB) + 50 (internal LBs) = 51 LBs, right? – Michal Sep 02 '15 at 11:40
  • @Michal that's correct, but not exactly cost-effective: the 50 internal ELBs will cost more than the micro machines. – Michael - sqlbot Sep 02 '15 at 13:29
  • @Michael-sqlbot I thought so; unfortunately, that disqualifies this solution. – Michal Sep 02 '15 at 13:41