2

I am using the nginx-rtmp module for live streaming. It works perfectly for 40-50 cameras on a single machine (AWS EC2 c3.large). But if I have more than 100 streams, how can I scale my servers to meet the requirement?

I have tried using ELB, but it terminates the existing connections once a new machine is launched, and after that it distributes incoming requests in a round-robin manner. What I want is the following:

  1. When the system's CPU utilization reaches 80%, launch a new server but keep the existing connections alive.
  2. Send new requests to the newly created server only if the first server's CPU utilization is above 80% (no round robin).

How can I achieve this? Thank you for your time.

Junaid
  • This sounds like a load-balancing question, but not being an expert I don't know how it would best be set up. I'm guessing a number of real machines or VMs, each talking to 40 cameras (then it is easy to just copy your VM when you need more cameras). The problem, as I see it, is that you probably want to be able to access all the cameras from a single IP / web address, which sounds like a load-balancing or proxying trick... but I wouldn't know how to set this up. Looking forward to the answer. – DaveM May 23 '15 at 17:58

2 Answers

3

If you are willing to switch over to HLS (nginx-rtmp supports HLS), it will, in my experience, make your life easier than trying to load-balance RTMP itself. Once you have HLS transcoding set up, the only thing you need is to either put a CDN in front of your web server and let that take care of the caching, or roll your own caching layer using Varnish, Squid, or even nginx itself (of course there are more possibilities). HTTP caching is so widespread that I'm sure you'll find an easy solution.
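As a sketch of the HLS route, here is a minimal nginx-rtmp configuration that writes HLS segments and serves them over HTTP (the paths, port, and fragment length are placeholders to adapt to your setup):

```nginx
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # write an HLS playlist and segments for each incoming stream
            hls on;
            hls_path /tmp/hls;
            hls_fragment 3s;
        }
    }
}

http {
    server {
        listen 80;
        # serve the segments; a CDN or Varnish/Squid can cache this location
        location /hls {
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            root /tmp;
            add_header Cache-Control no-cache;
        }
    }
}
```

Players then fetch `http://yourserver/hls/<streamname>.m3u8` like any other static HTTP resource, which is exactly what makes the caching layer trivial.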

If you want to stick with rtmp though, you could set up a similar infrastructure.

Have one master ingest server and multiple edge nodes that each pull from the ingest server. This setup would be fairly scalable and should work fine for your current load.
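In nginx-rtmp terms, that ingest/edge layout could look like the following sketch (hostnames are placeholders); each edge uses the `pull` directive to relay the stream from the master on demand:

```nginx
# nginx.conf on the master (ingest) server: cameras publish here
rtmp {
    server {
        listen 1935;
        application live {
            live on;
        }
    }
}
```

```nginx
# nginx.conf on each edge node: pulls from the ingest when a viewer connects
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            pull rtmp://ingest.example.com/live;
        }
    }
}
```

Viewers connect to any edge, and only one copy of each stream crosses from the ingest to that edge, so adding viewer capacity is just a matter of adding edge nodes.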

Edit: It seems I misunderstood your question. It would probably be easiest to have an API endpoint which your webcam can ask which RTMP server it should stream to, instead of trying to load-balance.

So once your rtmp server has reached X streams (see nginx-rtmp stat module), you launch a new instance and redirect new streams to that.
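A rough sketch of that check in Python, assuming each server exposes the nginx-rtmp stat page as XML (the URLs and the per-server stream limit below are made up for illustration):

```python
import xml.etree.ElementTree as ET
from urllib.request import urlopen

STREAM_LIMIT = 50  # assumed per-server capacity


def count_streams(stat_xml: str) -> int:
    """Count active live streams in an nginx-rtmp /stat XML document."""
    root = ET.fromstring(stat_xml)
    return len(root.findall(".//live/stream"))


def pick_server(servers):
    """Return the first RTMP URL whose server is under the stream limit.

    `servers` maps an rtmp:// publish URL to its /stat endpoint; returns
    None if every server is full (the caller would then launch a new
    instance and add it to the map).
    """
    for rtmp_url, stat_url in servers.items():
        with urlopen(stat_url) as resp:
            if count_streams(resp.read().decode()) < STREAM_LIMIT:
                return rtmp_url
    return None
```

Your API endpoint would call `pick_server` on each camera's request and hand back the chosen `rtmp://` URL.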

nginx-rtmp also has redirect functionality in on_connect (I can't post more than two links yet; just search for on_connect on the directives wiki page): it redirects by returning a 3xx status with a Location header. I am not sure whether this supports redirecting to a different node, but it would be worth a try as well, since it could avoid having to query the API manually before picking a server.
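A minimal sketch of such an on_connect handler in Python's standard library (the redirect target and port are assumptions; the mechanism is the 3xx-with-Location reply described above):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Assumed redirect target; nginx-rtmp reads it from the Location header
# of a 3xx reply to the on_connect notification.
TARGET = "live2"


class ConnectHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # nginx-rtmp posts the connect parameters (app, addr, ...) here;
        # a real handler would inspect them before choosing a target
        self.send_response(302)
        self.send_header("Location", TARGET)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass


# To run it alongside nginx (matching `on_connect http://127.0.0.1:8080/;`):
# HTTPServer(("127.0.0.1", 8080), ConnectHandler).serve_forever()
```

Whether the Location value can point at a remote node rather than another local application is exactly the open question, so test that part first.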

imer
  • Thank you for the answer. I already went with the approach you mentioned (the API method). `on_connect` doesn't redirect to remote nodes, and one additional catch is that the stat page goes blank if `nginx -s reload` is called. Right now I am using `on_update`, which tells the API which camera is connected to which server; the API then resolves the target for the next connection. – Junaid May 30 '15 at 13:17
-2

I don't know if nginx supports scalability with the RTMP module, but if you are free to change the server solution you can try our server, MonaServer.

It allows scalability and supports other protocols natively (like RTMFP).

You can find a configuration example of scalability with 3 servers here: http://www.monaserver.ovh/scalability.html#exchange-data-and-resources This sample redirects new subscriptions when the server has more than 400 subscribers. If you would rather use CPU utilization, you can change the following line:

if _nextServer and _subscribers>=400 then error(_nextServer.host) end

with:

if _nextServer and cpu>80 then error(_nextServer.host) end

Then you are free to find the best way of getting the CPU load (the `cpu` parameter). If you need to call C++ code, take a look at the FFI library (you can embed C++ code into Lua scripts without including any other library/plugin).

You can contact us on the forum if you need help.

I hope it will help you!

thomas