
I have a table with 60 columns (and thousands of rows) that references 15 catalog tables, and 3 of those reference another 3 catalog tables (the whole database is less than 1 GB). These tables are used to perform an "onchange" search query in a web app (what I mean is: a user selects or types a value in a select/input field on the screen, and the results update automatically until he/she finds what he/she is looking for).

As you would expect, this search query is performed very frequently, so MySQL struggles to fetch the results (with 10 concurrent users... for now).

I figure that using memory tables (a copy of the original tables) just for this query would be a good option, but I have never worked with this kind of table. I don't know how to manage inserts/updates/deletes in these tables. Do I have to perform those actions twice (once on the memory table and once on the original table)? I don't have a problem doing that, but is there another/better way?
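To illustrate what I mean by "doing it twice", one alternative I was considering is letting triggers mirror the writes into the copy instead of duplicating them in the application. A rough sketch (all table/column names here are placeholders):

```sql
-- Placeholder names; the MEMORY copy mirrors the InnoDB original.
-- Note: MEMORY tables do not support BLOB/TEXT columns.
CREATE TABLE search_data_mem LIKE search_data;
ALTER TABLE search_data_mem ENGINE=MEMORY;

-- One trigger per write path (AFTER UPDATE / AFTER DELETE would be analogous).
CREATE TRIGGER search_data_ai AFTER INSERT ON search_data
FOR EACH ROW
  INSERT INTO search_data_mem VALUES (NEW.id, NEW.col1 /* , ... */);
```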

I understand the limitations of memory tables regarding size (I can change max_heap_table_size) and data loss on a system crash (I can use init_file to repopulate the tables), and I have enough RAM (the system uses only 6 GB of the 16 GB installed). Is there anything else I should keep in mind when using this kind of table?
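The init_file repopulation I have in mind would be something like this (file path and table names are placeholders):

```sql
-- /etc/mysql/init.sql, wired up in my.cnf:
--   [mysqld]
--   init_file = /etc/mysql/init.sql
-- Refill the MEMORY copy from the InnoDB original at every server start.
DELETE FROM mydb.search_data_mem;
INSERT INTO mydb.search_data_mem SELECT * FROM mydb.search_data;
```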

prueba prueba
  • the main problem is that the data is lost when the system reboots or something happens, so you always have to back the data up, or keep a copy as a regular table. And because of Murphy's law a permanent backup is needed – nbk May 06 '20 at 15:55
  • I can resolve that issue using the init_file variable to repopulate the memory tables each time the system reboots. – prueba prueba May 06 '20 at 16:05
  • you misunderstood the issue: the backup costs time and resources, even incremental backups. That's why memory tables are usually not that big, and the use of SSD as quick space is limited, because the tables are that big to begin with. – nbk May 06 '20 at 16:17
  • But the "backup" would be the original table and the "recovery" action would be a simple INSERT-SELECT just when the service starts. In operation, if I write the original and, after that, the memory it shouldn't affect that much because the write operations are performed very sporadically. Or am I wrong? – prueba prueba May 06 '20 at 17:56
  • That is only true when you have a static table, without changes; tables that are highly frequented could be static, but usually they aren't – nbk May 06 '20 at 21:14

1 Answer


If your entire database fits in innodb_buffer_pool_size, the chances are that ENGINE=MEMORY will actually be slower, especially if you have any writes going into it, because the MEMORY engine has the same locking characteristics as MyISAM (table-level locking on writes). So in a read-only workload where all the data fits into the buffer pool, InnoDB performance should be similar. In a read-write workload, InnoDB will massively outperform MEMORY.
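As a quick sanity check (the schema name is a placeholder), you can compare the configured buffer pool size against the total data + index size of your database:

```sql
-- Configured buffer pool size, in MB.
SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS buffer_pool_mb;

-- Total data + index size of the schema, in MB.
SELECT SUM(data_length + index_length) / 1024 / 1024 AS total_mb
FROM information_schema.tables
WHERE table_schema = 'your_db';
```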

By all means, test it for your specific usage. But don't be surprised to discover that it is actually slower for your workload.

Gordan Bobić
  • The full database is 850MB and is on an HDD, not an SSD. There are not many write operations (but there are some), and normally they are done in batches (300-400 rows updated/inserted once every hour) – prueba prueba May 06 '20 at 17:45
  • If you have 1GB of `innodb_buffer_pool_size`, once the caches are hot, nothing will be read from disk anyway. The MEMORY engine makes sense when it is used within a single thread for a temporary/sorting operation on a query, because it avoids writing anything to disk (and thus consuming any disk I/O), but for your use case, I'm not convinced you will actually see any benefit if you can arrange for `innodb_buffer_pool_size` to cover the entire database. As I said, try it, but don't expect it to go any faster, because it should all be coming from RAM anyway. – Gordan Bobić May 06 '20 at 18:15
  • That's a good point, I will change that variable and see how it performs. – prueba prueba May 06 '20 at 18:31
  • 2
    It worked, the performance increased substantially. innodb_buffer_pool_size was 8MB, I changed it to 512MB for now. – prueba prueba May 06 '20 at 18:52
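For later readers: on MySQL 5.7.5 and newer, innodb_buffer_pool_size can be resized online without a restart (the 512M value mirrors the comment above):

```sql
-- Online resize (MySQL >= 5.7.5); the resize happens in chunks
-- of innodb_buffer_pool_chunk_size.
SET GLOBAL innodb_buffer_pool_size = 512 * 1024 * 1024;
```

To persist the change across restarts, also set `innodb_buffer_pool_size = 512M` under `[mysqld]` in my.cnf.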