I am going to go out on a limb and guess that these are static files and that you are not passing them through a CGI.
From my experience with profiling (and with googling about profiling), it's all about finding the bottleneck: optimizing the areas that take the most time, not spending all your effort speeding up a process that takes 5% of your time.
I'd like to know more about your setup.
What is the response time for one file?
What is the return trip time, for a ping?
How big are the files?
For example, if a ping takes 150ms, your problem is your network, not your nginx conf.
If the files are in the megabytes, it's not nginx.
If the response time differs between 1 and 30 requests per second, I would assume the fix is something more involved than fine nginx tweaks.
Can you shed any more light on the situation?
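To put numbers behind those questions, a couple of quick one-off checks are enough. This is just a sketch; `anon.com` is a placeholder for your own host:

```shell
#!/bin/sh
# Placeholder host -- substitute your own server.
HOST=anon.com

# Round-trip time: the avg value in ping's summary line is your network floor.
ping -c 4 "$HOST"

# Response time for a single file, broken down by phase.
# If connect dominates, suspect the network; if the gap between
# ttfb (time to first byte) and connect dominates, suspect the server.
curl -s -o /dev/null \
     -w 'dns:%{time_namelookup}s connect:%{time_connect}s ttfb:%{time_starttransfer}s total:%{time_total}s\n' \
     "http://$HOST/"

# File size, to rule out payload-bound transfers.
curl -sI "http://$HOST/" | grep -i content-length
```

If the curl total is close to the ping round trip, nginx is doing almost nothing and the wire is your cost.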
-- update --
I ran a benchmark on my out-of-the-box nginx server, fetching a typical index.php page.
When benchmarked from inside the server:
roderick@anon-webserver:~$ ab -r -n 1000 -c 100 http://anon.com/index.php
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking anon.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software: nginx/0.8.54
Server Hostname: anon.com
Server Port: 80
Document Path: /index.php
Document Length: 185 bytes
Concurrency Level: 100
Time taken for tests: 0.923 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Non-2xx responses: 1000
Total transferred: 380000 bytes
HTML transferred: 185000 bytes
Requests per second: 1083.19 [#/sec] (mean)
Time per request: 92.320 [ms] (mean)
Time per request: 0.923 [ms] (mean, across all concurrent requests)
Transfer rate: 401.96 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        2    4   1.6      4       9
Processing:     1   43 147.6      4     833
Waiting:        1   41 144.4      3     833
Total:          4   47 148.4      8     842
Percentage of the requests served within a certain time (ms)
50% 8
66% 8
75% 9
80% 9
90% 13
95% 443
98% 653
99% 654
100% 842 (longest request)
When benchmarked from my home desktop:
roderick@Rod-Dev:~$ ab -r -n 1000 -c 100 http://anon.com/index.php
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking anon.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software: nginx/0.8.54
Server Hostname: anon.com
Server Port: 80
Document Path: /index.php
Document Length: 185 bytes
Concurrency Level: 100
Time taken for tests: 6.391 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Non-2xx responses: 1000
Total transferred: 380000 bytes
HTML transferred: 185000 bytes
Requests per second: 156.48 [#/sec] (mean)
Time per request: 639.063 [ms] (mean)
Time per request: 6.391 [ms] (mean, across all concurrent requests)
Transfer rate: 58.07 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       40  260 606.9    137    3175
Processing:    81  214 221.7    140    3028
Waiting:       81  214 221.6    140    3028
Total:        120  474 688.5    277    6171
Percentage of the requests served within a certain time (ms)
50% 277
66% 308
75% 316
80% 322
90% 753
95% 867
98% 3327
99% 3729
100% 6171 (longest request)
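Side note on method: rather than eyeballing the full reports, I save each run to a file and grep out the headline numbers. The file names here (local.txt, remote.txt) are just examples:

```shell
# Save each run first, e.g.:
#   ab -r -n 1000 -c 100 http://anon.com/index.php > local.txt
# Then pull out only the lines that matter for the comparison.
grep -E 'Requests per second|^Time per request|Failed requests' local.txt remote.txt
```

For the two runs above that gives 1083 vs 156 requests per second with zero failures in both, which is the whole story in three lines.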
My OS is Linux and my CPU is 3 years old (it was a $500 server).
I have done absolutely nothing to the config file.
What does this tell me? nginx is not the problem.
Either your server's network blows or AWS is limiting your CPU. I would probably guess both.
If the fix is that important, I would get a dedicated server. But that's only as far as my knowledge goes.
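If you want to test the "AWS is limiting your CPU" theory, look at steal time. On Linux it is the 8th value on the cpu line of /proc/stat, and anything consistently above a few percent means the hypervisor is giving your cycles to someone else. A rough sketch:

```shell
#!/bin/sh
# Cumulative steal% since boot.
# /proc/stat cpu columns: user nice system idle iowait irq softirq steal
awk '/^cpu /{t=$2+$3+$4+$5+$6+$7+$8+$9; printf "steal: %.1f%%\n", ($9/t)*100}' /proc/stat

# Or watch it live: the 'st' column in vmstat is the same number.
vmstat 1 5
```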