My MySQL cluster: Ver 5.6.30-76.3-56 for debian-linux-gnu on x86_64 (Percona XtraDB Cluster (GPL), Release rel76.3, Revision aa929cb, WSREP version 25.16, wsrep_25.16)
I have a complicated SQL query that inserts about 36k rows into a table using this syntax:
INSERT INTO `sometable` (SELECT ...);
The SELECT itself is a bit complicated but not slow (0.0023s), yet the INSERT takes about 40-50s. The table is not in use while I'm inserting the rows.
My questions are:
- Can I speed it up somehow?
- How can I avoid the locking problems the slow insert causes on the other tables (because of the SELECT)?
- Is this workflow good or bad practice? Is there a better one?
Thanks
UPDATE:
The table schema:
CREATE TABLE `sometable` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`user_id` int(11) unsigned DEFAULT NULL,
`a` varchar(255) DEFAULT NULL,
`b` smallint(6) unsigned DEFAULT NULL,
`c` smallint(6) unsigned DEFAULT NULL,
`d` smallint(6) unsigned DEFAULT NULL,
`e` smallint(6) unsigned DEFAULT NULL,
`f` varchar(255) DEFAULT '',
`country_id` int(10) unsigned DEFAULT NULL,
`city_id` int(10) unsigned DEFAULT NULL,
`g` smallint(6) unsigned DEFAULT NULL,
`h` smallint(6) unsigned DEFAULT NULL,
`i` smallint(6) unsigned DEFAULT NULL,
`j` smallint(6) unsigned DEFAULT NULL,
`k` smallint(6) unsigned DEFAULT NULL,
`l` varchar(3) DEFAULT NULL,
`m` varchar(3) DEFAULT NULL,
`n` text,
`o` varchar(255) DEFAULT NULL,
`p` varchar(32) DEFAULT NULL,
`q` varchar(32) DEFAULT NULL,
`r` varchar(32) DEFAULT NULL,
`s` time DEFAULT NULL,
`t` time DEFAULT NULL,
`u` text,
PRIMARY KEY (`id`),
KEY `user_id` (`user_id`),
KEY `country_id` (`country_id`),
KEY `city_id` (`city_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
UPDATE2:
When I try to run the query, in some cases I get an error:
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
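Since ERROR 1213 explicitly says "try restarting transaction", one common workaround is to retry the whole INSERT transaction from application code with a backoff. A minimal sketch of that pattern, where `DeadlockError` and `flaky_insert` are hypothetical stand-ins for the driver's deadlock exception (e.g. MySQLdb raises `OperationalError` with code 1213) and for the real INSERT:

```python
import random
import time

# Hypothetical stand-in for the DB driver's deadlock exception;
# with a real driver you would catch its error class and check code 1213.
class DeadlockError(Exception):
    pass

def retry_on_deadlock(fn, max_attempts=5, base_delay=0.01):
    """Run fn(); if it raises a deadlock error, back off and restart it."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except DeadlockError:
            if attempt == max_attempts:
                raise  # give up after the last attempt
            # Exponential backoff with jitter before retrying the transaction.
            time.sleep(base_delay * (2 ** attempt) * random.random())

# Usage sketch: an operation that deadlocks twice, then succeeds.
calls = {"n": 0}
def flaky_insert():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError("ERROR 1213 (40001)")
    return "ok"

result = retry_on_deadlock(flaky_insert)
```

Retrying only masks the contention, though; shortening the transactions (as in the solution below) removes the cause.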
MY SOLUTION:
Here is my final solution, if anybody is interested: gist
The main problem was that while I was filling mytable,
the other queries got stuck and the cluster had serious performance problems. In this solution I create a temporary table and fill it with data in "dirty read" mode, then I copy that data to mytable
in chunks. It takes a bit more time overall, but there are no performance problems and the other queries are not blocked.
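The steps above can be sketched roughly like this (the full version is in the gist; the temporary table name, chunk size, and `@last_id` bookkeeping are assumptions for illustration, and the chunk loop would be driven from application code):

```sql
-- 1) Stage the rows in a temporary table, reading at the lowest
--    isolation level ("dirty read") so the source tables are not locked.
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
CREATE TEMPORARY TABLE tmp_sometable LIKE sometable;
INSERT INTO tmp_sometable (SELECT ...);  -- the original complicated SELECT
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- 2) Copy the staged rows into the real table in small chunks so each
--    transaction stays short and other queries are only briefly blocked.
--    Repeat, advancing @last_id past the last copied id, until no rows remain.
SET @last_id = 0;
INSERT INTO sometable
SELECT * FROM tmp_sometable
WHERE id > @last_id
ORDER BY id
LIMIT 1000;
```

Temporary tables are session-local and not replicated by Galera, which is also why the expensive staging step does not load the rest of the cluster.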