8

I have a table with 2 million records, but it will grow much more soon. Basically this table contains points of interest of an image together with their respective descriptors. When I try to execute a query that selects points spatially near the query point, the total execution time takes too long. More precisely, Duration / Fetch = 0.484 sec / 27.441 sec, and the query is quite simple, returning only ~17000 rows.

My query is:

SELECT fp.fingerprint_id, fp.coord_x, fp.coord_y, fp.angle,
fp.desc1, fp.desc2, fp.desc3, fp.desc4, fp.desc5, fp.desc6, fp.desc7, fp.desc8, fp.desc9, fp.desc10,
fp.desc11, fp.desc12, fp.desc13, fp.desc14, fp.desc15, fp.desc16, fp.desc17, fp.desc18, fp.desc19,
fp.desc20, fp.desc21, fp.desc22, fp.desc23, fp.desc24, fp.desc25, fp.desc26, fp.desc27, fp.desc28,
fp.desc29, fp.desc30, fp.desc31, fp.desc32
FROM fingerprint fp 
WHERE 
fp.is_strong_point = 1 AND 
(coord_x BETWEEN 193-40 AND 193+40) AND (coord_y BETWEEN 49-15 AND 49+15 ) 
LIMIT 1,1000000;

This is what I've done:

  1. I've tried changing key_buffer_size in my.ini, but didn't see much change.
  2. I've also tried adding indexes on coord_x and coord_y, but the query time became slower.
  3. The table is partitioned by range on the coord_x column, which gave me better results (both attempts are sketched below).
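
For reference, a rough sketch of what points 2 and 3 look like in SQL; the column types, index names and partition boundaries here are assumptions, not my real schema:

-- single-column indexes tried in point 2 (these made the query slower for me)
ALTER TABLE fingerprint ADD INDEX idx_coord_x (coord_x);
ALTER TABLE fingerprint ADD INDEX idx_coord_y (coord_y);

-- range partitioning from point 3; note that MySQL requires the partitioning
-- column to be part of every unique key on the table, including the primary key
ALTER TABLE fingerprint
PARTITION BY RANGE (coord_x) (
    PARTITION p0 VALUES LESS THAN (256),
    PARTITION p1 VALUES LESS THAN (512),
    PARTITION p2 VALUES LESS THAN MAXVALUE
);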

How can I reduce the fetch time? Is it possible to reduce it to milliseconds?

andriy

5 Answers

18

I faced the slow fetch issue too (MySQL, InnoDB). Eventually I found that innodb_buffer_pool_size was set to 8MB by default on my system, which is not enough to handle the query. After increasing it to 1GB, performance seems fine:

                    Duration / Fetch
353 row(s) returned 34.422 sec / 125.797 sec (8MB innodb buffer)
353 row(s) returned 0.500 sec / 1.297 sec (1GB innodb buffer)

UPDATE:

To change innodb_buffer_pool_size, add this to your my.cnf:

innodb_buffer_pool_size=1G

Restart MySQL for the change to take effect.
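
You can verify the setting from the client; as a side note (an assumption about your version, not something from the original answer), on MySQL 5.7.5 and later innodb_buffer_pool_size is dynamic and can also be resized without a restart:

-- check the current value (reported in bytes)
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- MySQL 5.7.5+ only: resize the buffer pool at runtime (here to 1GB)
SET GLOBAL innodb_buffer_pool_size = 1073741824;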

Reference: How to change value for innodb_buffer_pool_size in MySQL on Mac OS?

andy
0

If I am right, the query itself is really fast; what is slow is fetching the data from your database. It takes 27 seconds to load the ~17,000 results from your storage.

It looks like you are using the wrong storage engine for this workload. Try switching the table from one storage engine to another.

For maximum speed you can use the MEMORY engine. The only drawback is that you would have to keep a copy of that table in another engine if you need to make dynamic changes to it, and after any change you would have to reload the differences or the entire table.

You would also have to make a script that fires when you restart your server, so that your MEMORY table is reloaded on startup of your MySQL server.

See here for the doc
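
A minimal sketch of this idea, assuming the table layout from the question and that the descriptor columns are numeric (MEMORY tables cannot store BLOB/TEXT columns); the 512MB figure and the fingerprint_mem name are just examples:

-- allow MEMORY tables larger than the 16MB default (512MB here)
SET max_heap_table_size = 512 * 1024 * 1024;

-- build an in-memory copy of the table for fast reads
CREATE TABLE fingerprint_mem ENGINE=MEMORY
    AS SELECT * FROM fingerprint;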

ITroubs
  • I know that in Oracle it would probably be faster, but I must use MySQL in this project. I was thinking there might be some tricks in the MySQL configuration? Thanks for your answer. – andriy Mar 21 '13 at 12:21
  • @Andriy I hope you also know that MySQL can use more than one storage engine, MyISAM and InnoDB for instance... – ITroubs Mar 21 '13 at 12:22
  • @Andriy If you are using a tool like phpMyAdmin, you will be able to see what kind of storage engine your table is using. – ITroubs Mar 21 '13 at 12:24
  • Ok, it is configured to default-storage-engine=INNODB, I'll try to use MyISAM. – andriy Mar 21 '13 at 12:24
  • You could also use a MEMORY table, but as far as I've just read, you would have to reload these memory tables when the server reboots. The table would then be stored in memory, and access + retrieval would be much faster. – ITroubs Mar 21 '13 at 12:28
  • @Andriy I updated my answer to explain the MEMORY engine a bit further. – ITroubs Mar 21 '13 at 12:34
  • After changing the engine, the Duration / Fetch became 0.187 sec / 27.877 sec, which is not much faster. I've already thought about loading the whole table into memory, but the table is updated frequently. – andriy Mar 21 '13 at 12:34
  • Haha, well, execution and calculation of the query was 3 times faster. Retrieval still works from your hard drive, since the data is stored on your HD. What HD are you using? Could it be that the server is a virtual server and thus has a stronger bottleneck behaviour? – ITroubs Mar 21 '13 at 12:36
  • I'm using a standard Toshiba hard drive, because I'm working on my local machine. – andriy Mar 21 '13 at 12:47
  • I've experimented with another thing: I created a view where fp.is_strong_point = 1 and executed the query on this view. It returned the same 17k records but with completely different times: Duration / Fetch = 28.361 sec / 0.063 sec. So now fetching is fast but the duration isn't. Do you have any idea why? – andriy Mar 21 '13 at 15:12
  • 1
    Views are just a kind of alias, and they hide the fetching of your rows inside the duration. So what your view basically does is: it executes, fetches the rows, and then applies your "select ... from view", and that select is fast at fetching because everything is already in memory since the view fetched it. – ITroubs Mar 21 '13 at 15:14
  • I think the best way would be to look for an efficient storage engine, or to store your data in two tables where one uses a persistent engine and the other the MEMORY engine. You would then have to somehow sync both, and you would have to fill the MEMORY table every time you restart your server, but fetching from the MEMORY engine would (should) be super fast... – ITroubs Mar 21 '13 at 15:19
  • When I try to fill the MEMORY table I get this error: `Error Code: 1114. The table 'fingerprint' is full`. Does it mean that there is not enough memory on my machine? – andriy Mar 21 '13 at 15:48
  • No, it just means that the default table size limit (16MB, I think) has been reached. You have to change that in your MySQL config. You can read more about the limitations of the MEMORY engine in the documentation linked in my answer. – ITroubs Mar 21 '13 at 15:50
  • Thank you very much! Changing the table to the MEMORY engine and setting max-heap-table-size to 512MB solved my main problem. The query time was reduced to 0.093 sec / 0.312 sec! Now I'll investigate how to deal with insertions, but that is another challenge. – andriy Mar 21 '13 at 18:39
  • Remember that MEMORY tables are not persistent! After a reboot of your MySQL server all the data is gone! The easiest way would be to keep your data in two tables and always do two inserts. Or, if possible, you could put an insert trigger on your MEMORY table so that the trigger also writes the data into the second table, or something like that. – ITroubs Mar 21 '13 at 20:01
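
A hypothetical sketch of that last idea, reusing the fingerprint_mem name from the earlier sketch: inserts go to the MEMORY table and a trigger mirrors them into the persistent table. The trigger name is made up, and the remaining desc2 through desc32 columns are elided for brevity; they would need to be listed in full:

-- mirror every row inserted into the MEMORY copy back into the persistent table
CREATE TRIGGER fingerprint_persist_insert
AFTER INSERT ON fingerprint_mem
FOR EACH ROW
    INSERT INTO fingerprint (fingerprint_id, coord_x, coord_y, angle, desc1)  -- ... desc2 through desc32
    VALUES (NEW.fingerprint_id, NEW.coord_x, NEW.coord_y, NEW.angle, NEW.desc1);  -- ... NEW.desc2 through NEW.desc32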
-1

Increasing my buffer size made my query faster. But you need to open the my.ini file with an editor like Notepad++, because it may show up as hex data if you open it with plain Notepad.

akdjrdb
-2

I found a fix: just disable AVG or any other antivirus on your system and then restart your Workbench.

-2

Make sure that the following line is not present in your Hibernate configuration (hibernate.cfg.xml):

<property name="hbm2ddl.auto">create</property>

If it is there, remove it.

  • You should explain what the mentioned line does, so that the asker and the community can improve their knowledge. – xKobalt Mar 05 '20 at 13:55