I'm trying to optimize a single-core, 1 GB RAM DigitalOcean VPS to handle more requests per second. After some tweaking (workers, gzip, etc.) it serves about 15 requests per second. I don't have anything to compare it with, but I think this number can be higher.
The stack works like this:
VPS -> Docker container -> nginx (ssl) -> Varnish -> nginx -> uwsgi (Django)
I'm aware that this is a long chain and that Docker may add some overhead. However, almost all requests can be served straight from Varnish's cache.
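To check that the cache really is doing the work, I look at the response headers and Varnish's hit/miss counters, roughly like this (mycontainer is a placeholder for my actual container name, and the MAIN.* counter names are what I'd expect on Varnish 4+; they may differ on other versions):

curl -skI https://mydomain/ | grep -iE 'age|x-varnish|x-cache'
docker exec mycontainer varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss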
These are my test results:
ab -kc 100 -n 1000 https://mydomain/ | grep 'Requests per second'
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Requests per second: 18.87 [#/sec] (mean)
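While ab is running I also keep an eye on CPU and memory on the droplet to see whether the single core is already saturated; nothing fancy, just standard tools (again, mycontainer is a placeholder):

vmstat 1 10
docker stats --no-stream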
I actually have 3 questions:
- Am I correct that 18.87 requests per second is low?
- For a simple Varnish-cached Django blog app, what would be an adequate value, as an indication?
- I have already applied the recommended tweaks (tuned for my system) from this tutorial. What more can I tweak, and how do I figure out the bottlenecks? (See the sketch below for how I'm thinking of isolating each layer.)
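For the last question, my current idea is to benchmark each hop of the chain separately and compare the numbers, along these lines; 6081 and 8080 are just typical defaults and stand in for however Varnish and the inner nginx are actually exposed by my Docker setup:

ab -kc 100 -n 1000 https://mydomain/          # full chain, TLS included
ab -kc 100 -n 1000 http://127.0.0.1:6081/     # straight at Varnish, no TLS
ab -kc 100 -n 1000 http://127.0.0.1:8080/     # inner nginx + uwsgi, no cache

My reasoning: if the no-TLS numbers are much higher, SSL termination is the limit; if hitting uwsgi directly is about as fast as hitting Varnish, the cache isn't really being used. Is that a reasonable way to narrow it down?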