
I'm currently writing a PHP application that processes large amounts of data and writes it to MySQL. I use Percona MySQL with the TokuDB engine, but it is difficult to handle. The application currently makes about 400-800 inserts/updates per second, based on the "INSERT ... ON DUPLICATE KEY UPDATE" query. In the beginning it wasn't able to process all this data in time, so I had to move my database log files to a ramdisk, and it is working fine now; but if the amount of data increases even slightly, it will no longer be able to keep up.

So I'm wondering: is there a solution other than MySQL (or even MySQL itself, somehow) that is compatible with PHP, allows about 1,000 inserts/updates per second, supports simple queries like "SELECT * FROM fasterDB.fasterTABLE WHERE a = 1 AND b = 2", and also allows ~800 selects per second?

As far as I understand, TokuDB will soon have a new official release with a "NOAR" option that should make "INSERT ... ON DUPLICATE KEY UPDATE" much faster, but it's still unclear when it will be available for CentOS 7 :-(
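
For reference, a minimal PDO sketch of the two query shapes described above; the schema (fasterTABLE with columns a, b, val and a unique key on (a, b)) and the connection details are hypothetical stand-ins:

```php
<?php
// Hypothetical schema: fasterDB.fasterTABLE(a, b, val) with a UNIQUE key
// on (a, b). Credentials are placeholders.
$pdo = new PDO(
    'mysql:host=127.0.0.1;dbname=fasterDB;charset=utf8mb4',
    'user',
    'password',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);

// The upsert pattern from the question, one statement per row.
$upsert = $pdo->prepare(
    'INSERT INTO fasterTABLE (a, b, val) VALUES (:a, :b, :val)
     ON DUPLICATE KEY UPDATE val = VALUES(val)'
);
$upsert->execute([':a' => 1, ':b' => 2, ':val' => 42]);

// The read pattern from the question.
$select = $pdo->prepare('SELECT * FROM fasterTABLE WHERE a = :a AND b = :b');
$select->execute([':a' => 1, ':b' => 2]);
$row = $select->fetch(PDO::FETCH_ASSOC);
```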

  • Have a look at the benchmarks here: https://stackoverflow.com/a/3522938/2394254. They show around 17,000-20,000 inserts per second with MySQL, so whatever problem you are having is not due to the DB but to your server, I suppose. – mega6382 Oct 10 '17 at 12:38
  • You never specified *how* you do these inserts. If you do them one at a time, you're spending one I/O of your HDD per insert, and your constraint will be the I/O capacity of your disk. Judging by what you wrote, I'd assume you have a RAID made out of 4 mechanical drives (or a really fast mechanical HDD). To get significantly more throughput, you should wrap multiple queries in a single transaction, say 10-50 at a time. That way, you'll spend a single I/O for a whole batch of queries, increasing your throughput (see the sketch after these comments). – Mjh Oct 10 '17 at 12:42
  • Hardware is: a Google Cloud VM instance, 100 GB SSD, with 8 CPU cores and 30 GB RAM. – the_root Oct 10 '17 at 12:45
  • @mega6382 Yes, this benchmark is fine for pure inserts, but what about "INSERT ... ON DUPLICATE KEY UPDATE", or a "SELECT, UPDATE IF EXISTS, INSERT IF NOT" scenario? – the_root Oct 10 '17 at 12:48
  • So you disregarded what I wrote about transactions and grouping your queries; you... don't care about what that means, or? I even asked you how you do these inserts: one at a time or not. – Mjh Oct 10 '17 at 13:19
  • @Mjh I've tried PDO transactions, but didn't notice any performance difference. – the_root Oct 10 '17 at 13:31
  • That comment tells no one a single thing: what does it mean that you tried "PDO transactions"? How does that imply whether you managed to flush 50 inserts in a single I/O or not? You can use PDO and transactions and still waste one I/O per query. To get faster inserts, as I mentioned, you **group** queries in a transaction, and MySQL instructs the OS to flush the data in a single pass over the disk's sectors. By default, every query is its own transaction in MySQL, so you spend one I/O per query. Can you post the code you used? – Mjh Oct 10 '17 at 13:35
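
A minimal sketch of the grouping Mjh describes in these comments, assuming the same hypothetical schema as above (fasterTABLE with a unique key on (a, b)); the batch size and the $rows input are placeholders to tune and replace:

```php
<?php
// Batched upserts: commit every $batchSize statements so MySQL flushes the
// log once per batch instead of once per row. Schema, credentials, and data
// are hypothetical stand-ins.
$pdo = new PDO(
    'mysql:host=127.0.0.1;dbname=fasterDB;charset=utf8mb4',
    'user',
    'password',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);

$stmt = $pdo->prepare(
    'INSERT INTO fasterTABLE (a, b, val) VALUES (:a, :b, :val)
     ON DUPLICATE KEY UPDATE val = VALUES(val)'
);

$rows = [ // placeholder input; in the real application this arrives as a stream
    ['a' => 1, 'b' => 2, 'val' => 10],
    ['a' => 1, 'b' => 3, 'val' => 20],
];

$batchSize = 50; // 10-50 per the comment above; tune for your workload
$pending   = 0;

$pdo->beginTransaction();
foreach ($rows as $row) {
    $stmt->execute([':a' => $row['a'], ':b' => $row['b'], ':val' => $row['val']]);
    if (++$pending >= $batchSize) {
        $pdo->commit();           // one log flush for the whole batch
        $pdo->beginTransaction();
        $pending = 0;
    }
}
$pdo->commit();                   // flush the final, possibly partial batch
```

The point of the grouping is that with autocommit (MySQL's default), every execute() is its own transaction and forces its own log flush; an explicit transaction amortizes that flush across the whole batch.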

0 Answers