
I've been reading up on microservices for a few days now, and I'm wondering: how do people go about automating the load balancing and scaling of these things?

I have a specific scenario in mind that I would like to achieve, but I'm not sure whether it's possible or whether I'm thinking about it wrong. So here it goes...


Let's say I have a cluster of 3 CoreOS machines named A, B, and C.

First thing I want is transparent deployment for which I can probably use fleet.
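For concreteness, deployment with fleet might look something like this template unit (the service name, image, and port here are purely illustrative):

```ini
# myapp@.service - a hypothetical fleet template unit
[Unit]
Description=My microservice (instance %i)
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill myapp-%i
ExecStartPre=-/usr/bin/docker rm myapp-%i
ExecStart=/usr/bin/docker run --name myapp-%i -p 8080 example/myapp
ExecStop=/usr/bin/docker stop myapp-%i

[X-Fleet]
# Keep instances of the same service on different machines
Conflicts=myapp@*.service
```

Starting two instances would then be `fleetctl start myapp@1.service myapp@2.service`, and fleet schedules them onto A, B, or C.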

Then I would like to detect when one of the services is under heavy load, deploy another instance of it, and have the new instance and the original one load balanced automatically, in a way that does not disrupt the other services using it (traffic goes through a load balancer from then on).

Alternatively, I could manually deploy another instance of the service, which would then get load balanced automatically, with traffic routed to the load balancer.
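The scale-up decision itself can be reduced to a small function. This is only a toy sketch: the request rate would really come from a monitoring system, the per-instance capacity is an assumed number, and "spinning up an instance" would be something like `fleetctl start myapp@N.service`:

```python
import math

def desired_instances(requests_per_sec: float,
                      capacity_per_instance: float) -> int:
    """Return how many instances keep each one under its rated capacity."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(needed, 1)  # never scale below one instance

# Example: a 3x traffic spike against instances rated for 100 req/s each.
print(desired_instances(requests_per_sec=300, capacity_per_instance=100))  # 3
```

A real autoscaler would poll this decision on a timer and add hysteresis so it doesn't flap between scaling up and down.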

Finally, how is all this different from something like Akka Cluster, and how does developing such a system differ from developing microservices?

Matjaz Muhic

1 Answer


In my opinion, there's a hint to the answer in your own question: "(traffic goes through load balancer from now on)".

I would say: traffic should always go through the load balancer.

Even in the simplest case, when you have one instance of each service, traffic should still go through the load balancer (by the way, I think it's a good idea to have at least two of everything).
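With nginx, for example, that could be as simple as the config below (addresses and names are illustrative). Because the single instance already sits behind an `upstream` block, adding a second instance later is just one more `server` line:

```nginx
upstream myservice {
    server 10.0.0.11:8080;
    # server 10.0.0.12:8080;  # added when a second instance comes up
}

server {
    listen 80;
    location / {
        proxy_pass http://myservice;
    }
}
```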

In that case, when you get 3x more traffic and want to spin up another container of the same service, the new container, once it is up and running, must register itself in the service discovery tool, which in turn automatically updates the load balancer config to add a new 'upstream' entry.
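Here is a toy, in-memory sketch of that register-then-rebalance flow. In a real setup the registration would go to etcd or Consul and a tool such as confd or vulcand would rewrite the load balancer's upstream list; both parts are simulated here, and all names and addresses are made up:

```python
import itertools

class Registry:
    """Stands in for the service discovery tool (e.g. etcd)."""
    def __init__(self):
        self.instances = {}  # service name -> list of "host:port" strings

    def register(self, service, addr):
        self.instances.setdefault(service, []).append(addr)

class LoadBalancer:
    """Round-robins requests over whatever the registry currently lists."""
    def __init__(self, registry, service):
        self.registry, self.service = registry, service
        self._counter = itertools.count()

    def pick(self):
        upstreams = self.registry.instances[self.service]
        return upstreams[next(self._counter) % len(upstreams)]

reg = Registry()
lb = LoadBalancer(reg, "myservice")
reg.register("myservice", "10.0.0.11:8080")  # first container comes up
reg.register("myservice", "10.0.0.12:8080")  # scaled-up container registers
print([lb.pick() for _ in range(4)])  # alternates between the two upstreams
```

The key property is that the load balancer never needs manual editing: it reads the current instance list on every request, so new registrations take effect immediately.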

Using this approach, you will be able to scale your services up and down more easily.

Alex Kurkin
  • We are using a similar approach with success. The main requirement is that "the client should see no errors when a node goes down". – Sergey Alaev Jul 20 '15 at 12:26
  • Hm. Now I feel dumb that I didn't think of this myself. :/ Do you have any particular experience doing this? Which load balancer would you recommend? Any drawbacks/benefits for something like vulcand vs Elastic Load Balancer on AWS? Also, would you say having multiple instances of the same service on the same machine is pointless? – Matjaz Muhic Jul 20 '15 at 12:49
  • @MatjazMuhic I worked with nginx as the LB for multiple services behind it; it worked out quite well for my needs (I had 10 to 15 services). Also, if you go with nginx/haproxy or some other LB, you'd have to figure out the integration between the service discovery tool and updating the load balancer's configuration (it depends on the service discovery tool). And that looks like exactly what vulcand does. – Alex Kurkin Jul 20 '15 at 14:35
  • @MatjazMuhic On the last question: I wouldn't say that having the same containers running on the same machine is pointless. Surely there's a point in saving operating costs. On the other hand, it doesn't help you at all with providing high availability for your service (imagine the machine dies). – Alex Kurkin Jul 20 '15 at 14:45
  • @thaold Yeah, I get it. It only helps if you need more work done by that service, right? Then you won't spin up another machine just to deploy that one service. Or would you? – Matjaz Muhic Jul 20 '15 at 15:04