Hey, I currently see over 300 qps on my MySQL server. That's with roughly 12,000 unique visitors a day, no cron jobs, on fairly heavy PHP websites. I know it's pretty hard to judge whether that's OK without seeing the website, but do you think it's total overkill? What is your experience? If I optimize the scripts, do you think I could get the qps substantially lower? I mean, if I only get down to 200 qps, that won't help me much. Thanks
-
I'm not familiar with what *UIP* stands for. Could you expand on that? – Jim Rubenstein May 07 '11 at 21:00
-
How can you lower the number of queries against your database? Are you doing unnecessary SELECTs for the sake of it? :P – Max May 07 '11 at 21:00
-
UIP - unique visitors per day. – Kraketit May 07 '11 at 21:04
-
I can optimize the queries and make them more efficient, but that takes time and knowledge I don't have at the moment. – Kraketit May 07 '11 at 21:05
4 Answers
currently have over 300+ qps on my mysql
Your website can run on a Via C3, good for you!
do you think that it is a total overkill?
That depends on whether it's:
- 1 page/s doing 300 queries each: yeah, you've got a problem.
- 30-60 pages/s doing 5-10 queries each: then you've got no problem.
12000 UIP a day
We had a site with 50-60,000 UIP a day, and it ran on a Via C3 (your toaster is a datacenter compared to that crap server). The torrent tracker used about 50% of the CPU, so only half of that tiny CPU was available to the website, which never seemed to use any significant fraction of it anyway.
What is your experience?
If you want to know whether you are going to kill your server, or whether your website is optimized, the following has close to zero information content:
- UIP (unless you get Facebook-like numbers)
- queries/s (unless you're above 10,000; I've seen a cheap dual core blast 20,000 qps using Postgres)
But the following is extremely important :
- dynamic pages/second served
- number of queries per page
- duration of each query (ALL OF THEM)
- server architecture
- vmstat, iostat outputs
- database logs
- webserver logs
- database's own slow_query, lock, and IO logs and statistics
You're not focusing on the right metric...
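For the "number of queries per page" figure, here's a minimal sketch of how you could measure it in PHP, using MySQL's per-connection `Questions` status counter; the connection credentials and the logging target are placeholders, not anything from the question:

```php
<?php
// Count how many statements one page issues, via the per-connection
// 'Questions' status counter (MySQL 5.1+). Credentials are placeholders.
$db = new mysqli('localhost', 'user', 'pass', 'app');

function session_questions(mysqli $db) {
    $row = $db->query("SHOW SESSION STATUS LIKE 'Questions'")->fetch_row();
    return (int)$row[1];
}

$before = session_questions($db);

// ... render the page and run its normal queries here ...

$after = session_questions($db);
// The counter includes the SHOW statements themselves, hence the -1.
error_log(sprintf('%s issued %d queries',
    isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : 'cli',
    $after - $before - 1));
```

Log that for a day and you have your real queries-per-page distribution instead of a guess.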

-
+1 this is very insightful information. Synthetic benchmarks only tell you how much your stack can serve *under optimal conditions*. Our web server handles 25k+ reqs/sec when dealing with <100kb static content, but drops to ~2.5k reqs/sec when the content has to be generated and even lower if the page hits the DB a lot. – Morten Jensen Sep 14 '12 at 09:24
I think you are missing the point here. Whether 300+ qps is too much depends heavily on the website itself, on the visitors per second it gets, on the background scripts running concurrently, and so on. You should be able to test and/or compute an average query throughput for your server to understand whether 300+ qps is fair or not. And, by the way, it also depends on what those queries are asking for (a couple of fields, or large amounts of binary data?).
Surely, if you optimize the scripts and/or reduce the number of queries, you can lower the load on the database, but without specific data we cannot properly answer your question. Keep in mind that to get a 300+ qps load under 200 qps, you would have to cut your total queries by at least a third.
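As a rough way to compute that average throughput, here's a small sketch; it assumes an existing mysqli connection in `$db`, and the status variable names are MySQL's own, while everything else is illustrative:

```php
<?php
// Average qps since the MySQL server started, from its own counters.
// Assumes an existing mysqli connection in $db (MySQL 5.1+).
$res = $db->query(
    "SHOW GLOBAL STATUS WHERE Variable_name IN ('Questions', 'Uptime')"
);

$status = array();
while ($row = $res->fetch_row()) {
    $status[$row[0]] = (float)$row[1];
}

// Lifetime average; for a current reading, sample twice and diff the counters.
printf("avg qps since startup: %.1f\n", $status['Questions'] / $status['Uptime']);
```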

Optimizing a script can do wonders. I've taken scripts from 3 minutes down to 0.5 seconds simply by optimizing how the calls were made to the server. That is an extreme case, of course. I would focus mainly on minimizing the number of queries by combining them where possible, maybe getting creative with your queries so each round trip brings back more information.
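A minimal sketch of that combining idea: instead of issuing one SELECT per id in a loop, fetch the whole batch in a single round trip. The `users` table and its columns are made up for illustration:

```php
<?php
// Before (hypothetical): N round trips, one query per id.
// foreach ($ids as $id) {
//     $db->query("SELECT name FROM users WHERE id = " . (int)$id);
// }

// After: one query for the whole batch; assumes a mysqli connection in $db.
$ids = array(3, 17, 42);
$in  = implode(',', array_map('intval', $ids)); // int-cast keeps the IN() list safe

$res = $db->query("SELECT id, name FROM users WHERE id IN ($in)");
while ($row = $res->fetch_assoc()) {
    echo $row['id'], ': ', $row['name'], "\n"; // one pass instead of N queries
}
```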
And going from 300 to 200 qps is actually a huge improvement. That's a 33% drop in traffic to your server... that's significant.

-
I agree, it is a significant difference, but it doesn't solve the problem. I would have to upgrade later anyway. – Kraketit May 07 '11 at 21:05
-
Consider moving more content to static files unless it is constantly changing. Then have a cron job or something update those files as the data changes. You'd be serving static HTML instead of grabbing every piece of data from the database... in the end, fewer queries. – Mikecito May 07 '11 at 21:25
-
Well, I did that, but one of my websites has hundreds of thousands of pages in different languages, so most of the content is loaded quite rarely, and caching didn't help as much as I had hoped. – Kraketit May 07 '11 at 21:27
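For reference, the static-caching idea from the comments above can be as small as the sketch below; the cache directory (which must exist and be writable) and the 10-minute TTL are arbitrary placeholders:

```php
<?php
// Minimal file-based page cache (sketch). Serve a saved copy if it is
// fresh enough; otherwise render the page once and store the HTML.
$cacheFile = '/var/cache/app/' . md5($_SERVER['REQUEST_URI']) . '.html';

if (is_file($cacheFile) && filemtime($cacheFile) > time() - 600) {
    readfile($cacheFile); // static hit: zero database queries
    exit;
}

ob_start();
// ... render the page normally here, with all of its queries ...
$html = ob_get_clean();

file_put_contents($cacheFile, $html); // next hit within 10 minutes skips the DB
echo $html;
```

As the comment thread notes, this only pays off for pages requested more than once per TTL; a long tail of rarely-hit pages sees little benefit.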
You should not focus on the script; focus on the server.
You haven't said whether these 300+ queries per second are actually causing issues. If your server isn't dying, there is no reason to lower the number. And if you have already done the optimization work, focus on the server: upgrade it or buy more servers.

-
Well, my hosting company is forcing me to either upgrade or optimize. The question is: if I optimize, will it make a huge difference? Because if not, I might as well upgrade now and deal with optimization later. However, if the optimization would make a huge difference, I wouldn't have to upgrade and I could save a LOT of money. – Kraketit May 07 '11 at 21:03
-
Dropping from 300 to 200 is not going to make much difference. If you could get it down to 50 queries per second, that would be another matter. It seems that upgrading is the easy way: if the host gives you time to optimize, try; if not, upgrade. – Lauri May 07 '11 at 21:32