
Hey, I'm wondering what general options I should look into for optimizing an nginx server for large file downloads (typically 100 MB to 6 GB). I just migrated from lighttpd, and I'm noticing that during downloads, speeds fluctuate a lot and very quickly. I'm familiar with fluctuating speeds, but not at this rate; lighttpd didn't fluctuate nearly as much. Being new to nginx, I was wondering if there are some general things I should look into. Should I raise the worker count, etc.?

I was going through the wiki page for the HttpCoreModule and found the directio option:

The directive enables the use of the O_DIRECT flag (FreeBSD, Linux), the F_NOCACHE flag (Mac OS X), or the directio() function (Solaris) when reading files larger than the specified size. The directive disables the use of sendfile for such requests. It may be useful for big files.

Would that be an option to try out? Thanks guys, I appreciate the help.
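If it helps, this is roughly what I had in mind adding to my config (the location path and the 4m threshold are just placeholders I made up, not values I've tested):

    server {
        location /downloads/ {
            sendfile  on;    # files below the directio threshold still go through sendfile
            directio  4m;    # files larger than 4 MB are read with O_DIRECT and skip the page cache
        }
    }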

I know my question may be pretty broad, but like I said, being new to nginx I'm wondering what kinds of options I can look at to optimize the server for file downloads. I know a variety of things play a part, but I also know lighttpd didn't fluctuate as much on the exact same server.

Thanks!

1 Answer


How much RAM do you have? What kind of workload does your server have? Does it serve only big files, or does it serve smaller files and/or proxy requests as well?

DirectIO is useful when the set of active files is larger than RAM, so they won't fit in the cache and caching them is useless - it's better to read them directly from disk and leave the cache for something else.
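For example, you can scope it so only the big downloads bypass the cache while everything else stays cached (the paths and the 1g threshold here are only illustrative):

    # large archives: read straight from disk instead of evicting the rest of the cache
    location /repo/ {
        directio  1g;     # only files larger than this use O_DIRECT
    }

    # everything else keeps using sendfile and the normal page cache
    location / {
        sendfile  on;
    }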

As for the fluctuations - these are probably caused by nginx workers blocking on disk operations (by default they are synchronous). Try increasing the number of workers, or try using async I/O (aio on). But be careful: too much asynchronous I/O, or a large number of workers, might cause a much higher seek ratio, so overall speed might decrease dramatically.
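Something along these lines, with illustrative values (note that on Linux, aio in nginx of this vintage only takes effect together with directio, and it has to be compiled in):

    worker_processes  4;           # more workers, so one blocked on disk doesn't stall the others

    http {
        aio             on;        # asynchronous file reads where supported
        directio        512k;      # on Linux, aio is only used for reads that also go through directio
        output_buffers  1 512k;    # bigger output buffers for large sequential reads
    }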

rvs
  • Thanks! I have 4 GB of RAM. There's a light interface to it, which is a Rails app, but mainly it serves as a data repository. The workload isn't too heavy; aside from transferring fairly large files, it's not overly active or anything. It's used by about ten people, at least one of whom downloads at least one (usually two) such files daily. So I'm not necessarily looking to handle a ton of simultaneous downloads of a huge file; I'd just like to optimize the transfer of a large file. Perhaps you can give additional advice with this information? – Jorge Israel Peña Mar 31 '11 at 09:49
  • Ok, then you might try setting the number of workers to the max number of simultaneous downloads (+1 for dynamic requests), so each download can be handled by a separate worker. This should help avoid blocking workers on disk I/O. Also, use directio for files larger than 500m to avoid useless disk cache poisoning - roughly as sketched below. – rvs Mar 31 '11 at 11:02
  • So directio 500m; thanks man, I'll try this. – Jorge Israel Peña Mar 31 '11 at 22:51
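Putting rvs's suggestion into configuration form, it would look roughly like this (the worker count of 4 is just a stand-in for "max simultaneous downloads + 1 for dynamic requests", and the location path is made up):

    worker_processes  4;           # expected simultaneous downloads, plus one for the Rails/dynamic traffic

    http {
        server {
            location /downloads/ {
                sendfile  on;      # smaller files keep using sendfile and the page cache
                directio  500m;    # files over 500 MB skip the page cache instead of pushing everything else out
            }
        }
    }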