
I have a simple InnoDB table:

tag_id INT(20), PRIMARY KEY
tag VARCHAR(50)

There are only 106 tags in the table, yet this simple SELECT sometimes takes 10 s, 16 s, 30 s or more:

# Query_time: 26  Lock_time: 0  Rows_sent: 106  Rows_examined: 106
use database;
SELECT `tag`
FROM (`tags`);

My question: is there any way to optimize this query (so it doesn't take 26 s to complete), or is this a clear sign of MySQL server overload? Would upgrading from shared hosting to a VPS solve the problem?

Pyerro
  • Have you tried an EXPLAIN (EXPLAIN SELECT tag FROM tags)? That should give a little more information on how the query is being executed. Also, does the query actually take that long when you run it manually? – superultranova Jan 24 '15 at 15:05
  • @superultranova If I run the query manually I get normal results, roughly 30,000x faster (about 0.0007 s). These long query times only happen occasionally, in intervals that last a few minutes. If I run EXPLAIN, I get this, but I don't know what to do with it: `id: 1, select_type: SIMPLE, table: tags, type: ALL, possible_keys: NULL, key: NULL, key_len: NULL, ref: NULL, rows: 106, Extra: ''` – Pyerro Jan 24 '15 at 22:37

1 Answer


It might be faster if you force it to use the primary key instead of doing a full table scan. Try SELECT tag FROM tags USE INDEX(PRIMARY) if you are using InnoDB. Alternatively, you could just add WHERE tag_id > 0 to your query. From my understanding, if no index is used in the query, InnoDB will do a full table scan, which is more costly than an index scan; if you force it to use an index, it will scan the index instead to find all of the rows, which is probably going to be faster. There may be more at play here, though; I'm not as well versed in the MySQL/InnoDB internals as I used to be.
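A minimal sketch of both variants suggested above (table and column names are taken from the question; whether either actually beats the full scan on such a tiny table is worth verifying with EXPLAIN):

-- Hint the optimizer to walk the primary key instead of scanning the table.
SELECT tag
FROM tags USE INDEX (PRIMARY);

-- Alternative: a range condition on the primary key that matches every row,
-- assuming tag_id values are always positive.
SELECT tag
FROM tags
WHERE tag_id > 0;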

If you aren't using InnoDB, then I would guess the bottleneck is somewhere else (likely disk I/O). Upgrading to a different server isn't the only solution in that case (although it may solve the problem). If this table isn't constantly changing (i.e. changing every couple of seconds), it might be worthwhile to put some kind of memory cache in front of it, such as memcached (there are others as well). You may find a host with faster disk I/O, but no matter how you slice it, reads and writes to the disk are expensive, so some kind of caching procedure may be worth setting up either way.
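A minimal sketch of that caching pattern, assuming a PHP front end (the var_dump-style EXPLAIN output in the comments above suggests one) with the Memcached and mysqli extensions, and a memcached server on localhost; the cache key, TTL, and database credentials below are placeholders, not anything from the question:

<?php
// Sketch only: cache the rarely changing tag list in memcached so most
// page loads never touch MySQL. Host, credentials, key name, and TTL
// are placeholder assumptions.
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

$tags = $cache->get('all_tags');
if ($cache->getResultCode() !== Memcached::RES_SUCCESS) {
    // Cache miss: fall back to the database, then repopulate the cache.
    $db     = new mysqli('localhost', 'db_user', 'db_pass', 'database');
    $tags   = [];
    $result = $db->query('SELECT tag FROM tags');
    while ($row = $result->fetch_assoc()) {
        $tags[] = $row['tag'];
    }
    $cache->set('all_tags', $tags, 300); // expire after 5 minutes
}
// $tags now holds the tag list, served from cache or from MySQL.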

superultranova
  • In my manual tests I get query times from 0.0003 to 0.0005 s. With USE INDEX(PRIMARY) nothing really changes. This is just a small table with 106 rows, and it will maybe grow to 110 rows next year. I will try to implement some kind of caching or just upgrade to better hosting. – Pyerro Jan 25 '15 at 11:06
  • That's pretty typical; judging query speed from what you see in the monitor can be problematic. The time it takes to execute the query under load may be a completely different story, and there may be a big difference in speed between an index scan and a full table scan if the file I/O is saturated. If the table doesn't change much, though, a memory cache will serve you well. – superultranova Jan 28 '15 at 03:43