OK, I know I'm asking a question that's been asked multiple times before (e.g. here: Serve millions of concurrent connections and static files?), but there appear to be no concrete solutions, so I would like to ask again; please be patient with me.
We could use nginx for this, but we're using Apache for a number of reasons (e.g. familiarity with Apache, keeping the stack consistent, log formatting, etc.).
We are trying to serve a large number of concurrent requests for static files using Apache. This should be simple and straightforward, especially since the static files are small images, but Apache doesn't seem to be handling this well.
More specifically, Apache seems to be falling over on an Amazon EC2 m1.medium box (roughly 4 GB RAM and a single core with 2 hyper-threads) when it sees close to 100 concurrent requests per second. (The box itself appears to be handling more connections at that point: `netstat -n | grep :80 | grep SYN | wc -l` shows 250+ connections.)
The biggest issue is that requests for the static content sometimes take 5-10 seconds to be fulfilled, which makes for a bad experience for our end users.
We are not memory constrained; running `free -m` shows the following:
                 total       used       free     shared    buffers     cached
    Mem:          3754       1374       2380          0        139        332
    -/+ buffers/cache:        902       2851
    Swap:            0          0          0
Can we optimize Apache further so that it can handle more simultaneous connections and serve the static content more quickly? Would more RAM or CPU help (even though both appear to be under-utilized)?
Or is there some entirely different problem that we are missing?
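For reference, the kind of Apache tuning I have in mind is something along these lines. This is only a sketch with guessed values (not our actual production config), and it assumes Apache 2.4 with the event MPM enabled; on 2.2 the equivalent of MaxRequestWorkers would be MaxClients:

    # Sketch only - illustrative values for a 1-core / ~4 GB box serving
    # small static files, assuming Apache 2.4 with mpm_event.
    <IfModule mpm_event_module>
        ServerLimit             16
        ThreadsPerChild         25
        MaxRequestWorkers       400     # 16 x 25; called MaxClients on Apache 2.2
        MaxConnectionsPerChild  10000   # recycle child processes periodically
    </IfModule>

    # Keep-alive tuning so idle clients don't hold workers for long
    KeepAlive            On
    MaxKeepAliveRequests 100
    KeepAliveTimeout     2

Is this the right direction, or is this kind of MPM/keep-alive tuning unlikely to matter at ~100 concurrent requests?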