
I am planning on setting up a streaming server.

Streaming works roughly the way YouTube does it: there will be a large amount of audio content, and each user can listen to it right on the website. To accommodate a large number of concurrent users, I will be using a small cluster of servers.

Since I will be using NGINX, it will be responsible for distributing the load equally across the servers (fair load balancing, since each server will have identical specs). Each server is connected at 1 Gbps.

But since I am on a tight budget, I can't have too many servers, so I am planning on having a dynamic throttle (each song will have a known, constant bitrate). E.g. if 1 user connects, the whole bandwidth is available to him; when another connects, the bandwidth is split in half, and so on and so forth, so that every user is given an equal amount of bandwidth. (By bandwidth, I mean the speed at which a user streams music to his computer, e.g. 300 kb/s.)
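The sharing policy I have in mind, as a sketch (the function name and numbers are mine, just to illustrate the arithmetic):

```python
def per_user_rate_kbps(link_capacity_kbps, active_users, stream_bitrate_kbps=None):
    """Split the link evenly among active listeners.

    Optionally cap each share at the stream's own bitrate, since serving
    audio faster than it plays back just wastes bandwidth.
    """
    if active_users <= 0:
        return 0.0
    share = link_capacity_kbps / active_users
    if stream_bitrate_kbps is not None:
        share = min(share, stream_bitrate_kbps)
    return share

# 1 Gbps link ~ 1,000,000 kb/s:
# 1 listener  -> the whole pipe
# 2 listeners -> half each
# with a 300 kb/s stream bitrate, nobody needs more than 300 kb/s
```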

Throttling should continue until the maximum threshold value is reached. At that point NGINX should redirect requests to another server or, in case everything has reached its cap, return a 503 error message.
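The closest I have found in stock nginx is a static per-connection cap plus a concurrent-connection limit that rejects overflow with 503 (the zone name and numbers below are illustrative); the dynamic even split is exactly the piece that is missing:

```nginx
http {
    # One shared counter for all connections to this virtual host.
    limit_conn_zone $server_name zone=perserver:10m;

    server {
        location /audio/ {
            limit_conn perserver 3000;  # ~1 Gbps / ~300 kb/s per stream
            # limit_conn rejects overflow with 503 by default.
            limit_rate 300k;            # static, not dynamic
        }
    }
}
```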

The solution needs to be server-based, as I will be renting dedicated boxes from a datacenter and won't have access to the network equipment (firewalling will be done on the servers too). I have looked at what NGINX, Squid proxy, and HAProxy offer.

NGINX and HAProxy seem to limit only the number of requests from a user, not the speed at which they download. Squid seems to limit speed, but the limit is statically defined: e.g. I can cap each user at 640 kb/s, but if there is only 1 user currently connected, the rest of the pipe is left idle.
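(For what it's worth, nginx's limit_rate directive does cap download speed rather than request count, but it is static in the same way as Squid's limit, e.g.:)

```nginx
location /audio/ {
    limit_rate 640k;  # every connection capped at 640 kb/s,
                      # even when the server is otherwise idle
}
```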

I have been looking for a solution for 2 weeks already but can't find what I am looking for. Perhaps I don't know the proper terminology, either.

user31412

2 Answers


The solution is to use traffic shaping (specifically, a minimum bandwidth guarantee) and effective load balancing. However, this is beyond rocket science: I know a lot of people who get paid a lot of money as network experts and who still don't understand traffic shaping properly. You're not going to learn how to do it from reading some answers here and some online tutorials.

Each server is connected at 1 Gbps. But since I am on a tight budget,

You can afford multiple Gbps of internet connectivity but you can't afford a lot of servers? I think one of us is confused about what you are trying to say here.

I suspect the big problem here is premature optimization.
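For terminology's sake: on Linux this kind of shaping is done server-side with tc, where an HTB class with an SFQ leaf gives each flow a roughly equal share. A minimal sketch, not a tuned setup — the interface name and rates are assumptions:

```shell
# Assumed interface eth0; shape egress to ~950 Mbit so the NIC
# never saturates, and let SFQ share that rate evenly per flow.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 950mbit ceil 950mbit
tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
```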

symcbean
  • Sorry for the confusion. Each server will be connected at 1 Gbps. Let me explain my concern: say I have 1 server at 1000 Mbps. If users connect to it at 50 Mbps each, I can have a total of 20 concurrent users. Another example: if each user connects at 2 Mbps to the same server, I can have a total of 500 concurrent users. My aim is to squeeze the most out of each server, because in my application, allowing a client to stream beyond a certain speed wastes bandwidth, as no significant improvement will be noticed. – user31412 May 14 '13 at 16:29
  • Would you suggest lighttpd over nginx? It appears to offer better traffic shaping. I am fully aware that traffic shaping is not an easy thing to do. However, I do not require any sophisticated setup; I am simply looking at rate limiting so that each server's bandwidth is used effectively. – user31412 May 15 '13 at 12:55
  • I would suggest trying it. But one thing to remember: to stream at 1 Gbps you would need one hell of a hard drive/RAID setup to sustain enough constant streaming requests to saturate a 1 Gbps connection. The best way forward would be to run the default setup and benchmark how many connections you can afford to stream to. Your application might only end up allowing 100 concurrent connections. You need a benchmark first; then you can start to tweak and introduce optimisations. – Danie May 16 '13 at 12:29
  • Again: I suspect the big problem here is premature optimization. – symcbean May 16 '13 at 14:46
  • By premature optimization, do you mean I am trying to optimize too early? – user31412 May 17 '13 at 14:38
  • @user31412: yes. Build it, run it, measure it. Don't try to fix hypothetical problems. – symcbean May 17 '13 at 14:46
  • That is an extremely bad approach. "Failure to plan is planning to fail." This is a small business venture and I am going to invest a substantial (to me) amount of money, so I need extra reassurance on all parts of the process. I also need to provide a decent quality of service, not a "first come, first served" basis where a small number of people with fast internet connections eat most of the bandwidth, leaving everyone else with painfully slow (and unattractive) service. – user31412 May 19 '13 at 08:08

Maybe you should try another good web server, lighttpd.

Read this blog entry: Rate Limiting
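For reference, the lighttpd knobs that post covers are static caps of this form (values illustrative):

```conf
# lighttpd rate limiting -- static caps, set in lighttpd.conf
connection.kbytes-per-second = 640     # per client connection
server.kbytes-per-second     = 100000  # whole-server ceiling
```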

Danie
  • Thanks for the link; however, I am using NGINX, not lighttpd. – user31412 May 14 '13 at 16:34
  • Is there a way to limit incoming traffic within a websocket/TCP connection? My application will broadcast a video stream to the client, but I do not want maliciously large incoming traffic. – George Y Jun 30 '21 at 07:44