
I have a problem that probably many people have: servers that become unresponsive over time.

The problem: how can I figure out how much a server can handle? I've never studied server capacity, load balancing based on that, and so on, so I need ideas on how to get that information.

Possible way to solve it: one thing I thought of was monitoring the servers (probably with a solution like Nagios).

Kind of software running: it's a download service that first fetches files with Aria2 (a download manager controllable through a web service); the server then serves them through nginx. (The files are each owned entirely by one user, so they won't be requested by many users at the same time; in theory no load balancing is needed for serving.)
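For reference, the concurrency cap and the control interface are both set in aria2's configuration; a minimal sketch (the path and the numbers are assumptions, adjust to your setup):

```
# /etc/aria2.conf (hypothetical path)
# limit how many files download simultaneously
max-concurrent-downloads=25
# preallocate files to reduce disk fragmentation
file-allocation=falloc
# expose the RPC interface the web service talks to
enable-rpc=true
rpc-listen-port=6800
```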

I've limited it to 50 files downloading simultaneously, then 25, which is pretty low for an i3 server, and even then the CPU sits at 0.0% idle and very little memory stays available out of 4 GB, like 200 MB or less.

I'm running some servers with Transmission (a torrent client) and I don't have these problems there, even though torrents are supposedly more process-, resource-, and connection-intensive.

I can't tell what the load on the server is just from the graphs at the hosting provider. To give an idea, the maximum we've seen: upload 10.51 MB/s, download 58.82 MB/s, no more than 300-400 connections, on an 8-core machine with 6x 1 TB disks running 3 VMs under Xen.
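To get actual numbers off the box instead of the provider's graphs, a quick snapshot from /proc works on any Linux box with nothing extra installed; a minimal sketch:

```shell
#!/bin/sh
# 1-, 5- and 15-minute load averages plus running/total tasks
cat /proc/loadavg
# memory picture in kB
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo
# rough count of established TCP connections (state 01 = ESTABLISHED)
awk 'NR > 1 && $4 == "01"' /proc/net/tcp | wc -l
```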

Even nginx breaks: people say their downloads get interrupted, and then we noticed the nginx process sometimes just shuts down.
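When a worker dies, the reason is almost always in the error log; it's also worth making sure the per-worker connection limit isn't being hit, since every in-progress download holds a connection. A hypothetical nginx.conf fragment (paths and numbers are assumptions):

```
worker_processes  2;              # roughly one per core doing real work
events {
    # raise this if the error log shows
    # "worker_connections are not enough"
    worker_connections  1024;
}
# make sure crashes are actually logged somewhere you look
error_log  /var/log/nginx/error.log  warn;
```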

We have different servers, with Xen and without, and they all show the same behavior.

Ah, I almost forgot: we had CentOS, then we switched to Gentoo.

We tried wget before; it was even worse. I was thinking about JDownloader, but that needs an X11 environment (which would take even more resources), and I haven't found any other download manager that speaks RPC/REST/SOAP.

Is there any Linux guru who can tell me where to start, how to discover the server's limits, and how to make the system stable? Thank you guys,

PartySoft

1 Answer


Some type of monitoring is really necessary here; without it you're just guessing. I'd suggest Munin rather than Nagios. Nagios isn't going to tell you what is wrong, it's just going to tell you that your services are down, and that really won't be a whole lot of help to you.

Nginx can handle the speeds you mentioned without breaking a sweat. It's unlikely to be your problem.

When you say it has very low memory available, what exactly are you looking at? If you're just looking at the output of 'free -m', note that Linux will use any "free" memory as disk cache. If any process on the machine actually needs that memory, the disk cache will be freed for it. You don't want to disable disk caching; it's probably not hurting anything.
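To illustrate: memory genuinely available to new processes is roughly MemFree plus buffers plus page cache (the same figure the "-/+ buffers/cache" row of 'free' shows). A small sketch that prints it from /proc/meminfo:

```shell
#!/bin/sh
# "Effectively free" = MemFree + Buffers + Cached, since the kernel
# drops cache pages as soon as a process needs the memory.
awk '/^MemFree:/  {free = $2}
     /^Buffers:/  {buf = $2}
     /^Cached:/   {cache = $2}
     END {printf "effectively free: %d MB\n", (free + buf + cache) / 1024}' /proc/meminfo
```

If that number is also close to zero, then something really is eating the RAM; if it's large, the "low memory" is just cache.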

How many processes do you end up with running at any given time? Linux tends not to be happy if you've got over 150 active processes at once.
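A quick way to check: every numeric directory in /proc is one process, so counting them gives the total without needing any extra tools.

```shell
#!/bin/sh
# Total processes: each numeric entry in /proc is one PID
ls /proc | grep -c '^[0-9][0-9]*$'
```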

There's really no simple answer here; you need some sort of monitoring to tell you what's going on. I suggest you get into the habit of never deploying a production machine without monitoring. It's that important (and easy to set up with Munin).

devicenull