Hey, I'm wondering what general options I should look into for optimizing an nginx server for large file downloads (typically 100 MB to 6 GB). I just migrated from lighttpd, and I'm noticing that speeds fluctuate a lot during downloads, very quickly. I'm familiar with fluctuating speeds, but not at this rate; lighttpd didn't fluctuate nearly as much. Being new to nginx, I was wondering if there are some general things I should look into. Should I up the worker process count, etc.?
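For context, my config is pretty close to stock; the relevant bits look roughly like this (the worker count is the default, and the path is just a placeholder, not my real setup):

    worker_processes  1;              # the default -- not sure if this should go up

    events {
        worker_connections  1024;
    }

    http {
        sendfile  on;                 # left at on for now

        server {
            listen  80;
            location /downloads/ {    # placeholder path
                root  /var/www;
            }
        }
    }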
I was going through the wiki page for the HttpCoreModule and came across the directio option:
    The directive enables use of flags O_DIRECT (FreeBSD, Linux), F_NOCACHE (Mac OS X) or directio() function (Solaris) for reading files with size greater than specified. This directive disables use of sendfile for this request. This directive may be useful for big files.
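From that description, I'm guessing the usage would be something like this (the 4m threshold and the path are just guesses on my part, not values I've tested):

    location /downloads/ {            # placeholder path
        root            /var/www;
        directio        4m;           # use O_DIRECT reads for files larger than 4 MB;
                                      # per the docs, sendfile is skipped for those requests
        output_buffers  1 512k;       # larger output buffers seem to be suggested with
                                      # directio, since sendfile is out of the picture
    }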
Would that be an option to try out? Thanks guys, I appreciate the help.
I know my question may be pretty broad, but like I said, being new to nginx, I'm wondering what kinds of options I can look into to optimize the server for file downloads. I know a variety of things play a part, but I also know lighttpd didn't fluctuate as much on the exact same server.
Thanks!