1

Lately we have been having a lot of problems with our MySQL server: websites are really slow, or sometimes fail to load at all. The server is a dedicated machine that only runs our MySQL database. I have been running some tests using a profiler (JetProfiler) and a stress-testing tool (loadUI).

If I use loadUI to open 50 simultaneous connections to one of our websites that runs a fairly big query, the website already becomes unable to load. One of the things that worries me is that JetProfiler always shows a Threads_connected of 1.00, and it seems that when it hits around 2.00 I'm unable to connect.

[screenshot: JetProfiler graph of Threads_connected over time, showing three large peaks]

The three big peaks are test runs with loadUI: the first was 15 simultaneous connections, which left the website loadable but really slow; the second was 40 simultaneous connections, which already made it impossible to load; and the third was 100 connections, which also failed to load.

Another thing that worries me is that JetProfiler reports that all the queries being run are full table scans; could this be the problem? The website I use as a test runs three queries: one for a menu that returns around 1000 rows, one for the ads that returns around 560 rows, and a big one to fetch posts that returns around 7000 rows (see screenshot below).

[screenshot: JetProfiler query breakdown showing the three queries and their row counts]

I have also monitored the CPU of the server and there seems to be no problem there; even when I open a lot of connections with loadUI, the CPU stays low.

I can't figure out the main cause of the websites being unable to load when there is a high amount of traffic. If anyone has suggestions for further testing, or ideas about what might be causing the problem, please let me know.
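One way to watch connection usage live while load-testing, instead of relying only on JetProfiler's graph, is to query MySQL's own status counters. These are standard MySQL statements, shown here only as a sketch of what to check:

```sql
-- Current number of open connections and the configured ceiling
SHOW STATUS LIKE 'Threads_connected';
SHOW VARIABLES LIKE 'max_connections';

-- High-water mark of simultaneous connections since the server started
SHOW STATUS LIKE 'Max_used_connections';

-- What each connection is doing right now (look for long-running queries
-- or many threads stuck in the same state during a loadUI run)
SHOW FULL PROCESSLIST;
```

If `Max_used_connections` is far below the number of loadUI clients, connections are being refused or queued before they ever reach MySQL.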

Lucas Kauffman
  • 16,880
  • 9
  • 58
  • 93
Holapress
  • 11
  • 1
  • 2
  • How many cores does your server have? If you are consuming more than one core, and you are on a virtual server that is only given one core, then multiple threads will not provide increase performance. – Myrddin Emrys Apr 04 '12 at 08:26
  • Server has 2 cores – Holapress Apr 04 '12 at 08:30
  • How does your database schema look? Do you use indexes? – Tim Apr 04 '12 at 08:36
  • Every table has a primary index and that's it – Holapress Apr 04 '12 at 08:39
  • I started to comment that lack of indexes can be a serious issue (and it can), but that would also spike your CPU. You have indicated your utilization remains low. However, can you clarify your statement about 'menu that outputs 1000 rows'. Do you mean that every page request returns 1000 results for the menu query? This seems excessive... I would expect 10 results, not 1000. Same for the other interactions. Your bottleneck may be throughput (bandwidth). – Myrddin Emrys Apr 04 '12 at 08:47
  • The CMS they made here seems to put all the websites' menus into one table and then queries them based on the website's ID, but since all the queries are full table scans it outputs 1000 results. Would there be a way to monitor my bandwidth live while making a lot of connections to the website? – Holapress Apr 04 '12 at 08:54
  • Checked the bandwidth of the server and it only uses 5 MB max when making 1000 simultaneous connections – Holapress Apr 05 '12 at 09:36

3 Answers

0

Another thing that worries me is that in JetProfiler it says all the queries that get used are full table scans, could this maybe be the problem?

Oh Yes!

As I like to say:

To do anything (like reading a database row) takes a finite amount of time.
To do that same thing a thousand times takes at least a thousand times as long.

Databases work [really] well when they have indexes to support the queries that are being run. Indexes let them get to the rows they need far, far faster.

For the query you showed, I would suggest an index on all three of the fields in the WHERE clause - webid, actief and datum - and probably in that order. Have a play: experiment with running this query against real volumes of data with different indexes - in your Development/Test database, of course, not in Production - and see what works best for your workload.
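As a sketch, the index could be created like this. The column names come from the query above; the table name `posts` is an assumption, so substitute the real one:

```sql
-- Composite index covering all three WHERE-clause columns.
-- Table name "posts" is assumed; adjust it to the real table.
CREATE INDEX idx_webid_actief_datum
    ON posts (webid, actief, datum);

-- Then prefix the original query with EXPLAIN and check that the
-- "key" column shows the new index instead of NULL (full table scan).
```

Column order matters in a composite index: MySQL can use it for equality filters on the leading columns (`webid`, `actief`) and a range on the last (`datum`), but not the other way round.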

Phill W.
  • 1,479
  • 7
  • 7
0

Some nice pictures, but there's a lot of info missing from your post.

What's the max connections? How big is the database? What engine? What's the key buffer size? How much free memory does the DBMS have?
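For reference, all of these can be pulled straight from the server with standard MySQL statements (the schema name below is a placeholder):

```sql
SHOW VARIABLES LIKE 'max_connections';
SHOW VARIABLES LIKE 'key_buffer_size';           -- relevant for MyISAM
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';   -- relevant for InnoDB

-- Storage engine and approximate size of each table
SELECT table_name,
       engine,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
FROM information_schema.tables
WHERE table_schema = 'your_database';            -- placeholder name
```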

full table scans, could this maybe be the problem?

Could be - there's not enough information to say. If the query needs to read more than roughly a tenth of the data, then a full table scan will be faster than an index lookup.

queries, one for a menu that outputs around 1000 rows

You have menus on your web pages with 1000 entries? Then most of your problems are not in the database. Your application is fetching FAR TOO MUCH DATA from the database. If it then sends this information across the internet, performance will be appalling. If it's not sending out all the data it fetches, then why aren't you doing the filtering in the DBMS?

Every table has a primary index and thats it

Maybe you should think about aligning your schema with your queries (i.e. add relevant indexes); however, I think the biggest problem you've got is that you're fetching data from the DBMS which just isn't getting used.

OTOH, do you mean that when you analyse the queries using the same tool as in your screenshot, it tells you it's processing thousands of rows? In that case, fixing the indexing will have a huge impact.

symcbean
  • 21,009
  • 1
  • 31
  • 52
0

With a slowdown occurring as the number of simultaneous connections increases, this could be a connection pooling issue. If the server/connection pool only allows a set number of connections (say 5), then later connections will be queued until a slot frees up. Also, if a single page hit fires several SQL queries, but each one opens a brand-new connection to MySQL (rather than reusing an earlier connection), this can generate masses of connection requests.

How you control connection pooling is implementation specific (different languages/libraries do it in different ways) and there is a sweet spot as far as how many connections to the database are kept open. With systems such as Tomcat, Tomcat itself can be set to open connections to the database when it is started and have these available for applications/servlets as and when they are needed. You'd have to profile your app and usage to see what a sensible sweet spot is (as holding open unnecessary connections uses memory).

Otherwise, most scripting languages used for web stuff (as an example) have database libraries that open a connection at the start of the page and then you just reuse that reference for the rest of the page. Worth checking whether this is happening (though you may not have massive control over the code).
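You can check for this connection churn from the MySQL side using standard status counters (a sketch, not specific to any particular app framework):

```sql
-- Total connection attempts since the server started.
-- Sample this twice during a load test: if the delta is several times
-- the number of page hits, the app is opening a fresh connection
-- per query rather than reusing one per page.
SHOW GLOBAL STATUS LIKE 'Connections';

-- Number of connections open right now, for comparison
SHOW GLOBAL STATUS LIKE 'Threads_connected';
```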

I'm basing this off timing issues I've had at work with connection pooling between a Tomcat web app and a proprietary database system. It may not be relevant.

webtoe
  • 1,976
  • 11
  • 12