91

I'm running a report in MySQL. One of the queries involves inserting a large number of rows into a temp table. When I try to run it, I get this error:

Error code 1206: The number of locks exceeds the lock table size.

The queries in question are:

create temporary table SkusBought(
customerNum int(11),
sku int(11),
typedesc char(25),
key `customerNum` (customerNum)
)ENGINE=InnoDB DEFAULT CHARSET=latin1;
insert into skusBought
select t1.* from
    (select customer, sku, typedesc from transactiondatatransit
    where (cat = 150 or cat = 151)
    AND daysfrom07jan1 > 731
group by customer, sku
union
select customer, sku, typedesc from transactiondatadelaware
    where (cat = 150 or cat = 151)
    AND daysfrom07jan1 > 731
group by customer, sku
union
select customer, sku, typedesc from transactiondataprestige
    where (cat = 150 or cat = 151)
    AND daysfrom07jan1 > 731
group by customer, sku) t1
join
(select customernum from topThreetransit group by customernum) t2
on t1.customer = t2.customernum;

I've read that changing the configuration file to increase the buffer pool size will help, but that does nothing. What would be the way to fix this, either as a temporary workaround or a permanent fix?

EDIT: changed part of the query. Shouldn't affect it, but I did a find-replace all and didn't realize it screwed that up. Doesn't affect the question.

EDIT 2: Added typedesc to t1. I changed it in the query but not here.

maxman92
  • Why are you grouping in the sub-selects? – Tim Aug 01 '11 at 18:54
  • I find this hard to understand. If t2.customernum = t1.customer it doesn't make sense to select only customernum from topThreetransit. Surely SkusBought.typedesc is then the same customer code as the first column? – RedGrittyBrick Aug 03 '11 at 08:53
  • t2 is a subset of customers in t1. The join is to get rid of customers in t1 that aren't in t2. The code for typedesc is actually incorrect. Again, changed it in the actual sql script but not in here. Typedesc is another column of transactiondata (all three of them). I'll change it so it's right and it makes more sense. – maxman92 Aug 03 '11 at 13:09
  • I wrote up a quirky illustration of a common problem [here](http://stackoverflow.com/a/38282283) – Drew Jul 09 '16 at 14:22
  • 1
    Possible duplicate of ["The total number of locks exceeds the lock table size" Deleting 267 Records](http://stackoverflow.com/questions/10253482/the-total-number-of-locks-exceeds-the-lock-table-size-deleting-267-records) – user3338098 Dec 05 '16 at 21:25
  • did you resolve the issue ? – Pranav MS Aug 04 '17 at 06:41
  • `SET GLOBAL innodb_buffer_pool_size=13421772800;` – conor Nov 30 '21 at 14:12

14 Answers

81

This issue can be resolved by setting a higher value for the MySQL variable innodb_buffer_pool_size. The default value of innodb_buffer_pool_size is 8,388,608 bytes (8 MB).

To change the value of innodb_buffer_pool_size, follow the steps below:

  1. Locate the my.cnf file on the server. On Linux servers this is usually at /etc/my.cnf
  2. Add the line innodb_buffer_pool_size=64M to this file (see the example snippet below)
  3. Restart the MySQL server
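
For example, after step 2 the relevant section of my.cnf might look like this (a rough sketch; pick a value that fits your data and available RAM, and note the comments below this answer about newer MySQL versions expecting at least 128M):

[mysqld]
innodb_buffer_pool_size=64M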

To restart the MySQL server, you can use either of the following two commands:

  1. service mysqld restart
  2. /etc/init.d/mysqld restart

Reference: The total number of locks exceeds the lock table size

PHP Bugs
  • 1
    The trick is to find and modify the right my.cnf file. On my mac I have at least 20 files with this name, stale from previous versions, OS, etc. The my.cnf file that worked for me once modified as suggested was: /usr/local/mysql-5.6.23-osx10.8-x86_64/my.cnf – Toren May 09 '19 at 20:10
  • 4
    in Wamp is my.ini – Enrique Dec 11 '19 at 21:35
  • 2
    Note the minimum these days is 128MB, so 64MB won't work, you need to give something bigger than 128MB – Noam Rathaus Jul 21 '20 at 10:14
  • 19
    From MySQL 5.7.5 you can simply run `SET GLOBAL innodb_buffer_pool_size=268435456;` without searching my.cnf file and restarting mysql. https://stackoverflow.com/a/38333056/3553564 – Klim Sep 08 '20 at 09:56
24

I found another way to solve it: use a table lock. Of course, this may be inappropriate for your application if you need to update the table at the same time.

See: Try using LOCK TABLES to lock the entire table, instead of the default action of InnoDB's MVCC row-level locking. If I'm not mistaken, the "lock table" is referring to the InnoDB internal structure storing row and version identifiers for the MVCC implementation with a bit identifying the row is being modified in a statement, and with a table of 60 million rows, probably exceeds the memory allocated to it. The LOCK TABLES command should alleviate this problem by setting a table-level lock instead of row-level:

SET @@AUTOCOMMIT=0;
LOCK TABLES avgvol WRITE, volume READ;
INSERT INTO avgvol(date,vol)
SELECT date,avg(vol) FROM volume
GROUP BY date;
UNLOCK TABLES;

Jay Pipes, Community Relations Manager, North America, MySQL Inc.

Archie
  • I followed the instructions in this answer and experienced no improvement at all. I'm running version 5.6. – mbmast Feb 11 '15 at 17:25
  • 7
    This solution didn't work for me, either, using MySQL 5.1. Maybe this no longer works for more recent versions? – frances Apr 20 '15 at 16:39
  • 2
  • In my case the isolation level was not important and I changed it to READ UNCOMMITTED. That resolved the problem with the locks. But it depends on your use case. SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; – itstata Dec 12 '17 at 08:47
  • 1
    Locking that table doesn't seem to stop the rows from being locked individually. So the lock table size is still exceeded. – B. Bogart Jul 01 '21 at 15:33
16

From the MySQL documentation (which, as I can see, you have already read):

1206 (ER_LOCK_TABLE_FULL)

The total number of locks exceeds the lock table size. To avoid this error, increase the value of innodb_buffer_pool_size. Within an individual application, a workaround may be to break a large operation into smaller pieces. For example, if the error occurs for a large INSERT, perform several smaller INSERT operations.

If increasing innodb_buffer_pool_size doesn't help, then follow the second suggestion and split your INSERT into three. Skip the UNIONs and make three INSERTs, each with a JOIN to the topThreetransit table, as sketched below.
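
For example, the first of the three inserts could look roughly like this (a sketch based on the question's tables; the other two would swap in transactiondatadelaware and transactiondataprestige):

insert into SkusBought (customerNum, sku, typedesc)
select t1.customer, t1.sku, t1.typedesc
from transactiondatatransit t1
join (select customernum from topThreetransit group by customernum) t2
    on t1.customer = t2.customernum
where (t1.cat = 150 or t1.cat = 151)
  and t1.daysfrom07jan1 > 731
group by t1.customer, t1.sku;

Note that, unlike the UNION version, three separate inserts will not de-duplicate rows that appear in more than one source table.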

MicSim
9

First, you can run the SQL command show global variables like 'innodb_buffer%'; to check the current buffer pool size.

The solution is to find your my.cnf file and add:

[mysqld]
innodb_buffer_pool_size=1G # depends on your data and machine

DO NOT forget to add [mysqld], otherwise it won't work.

In my case (Ubuntu 16.04), my.cnf is located in /etc/mysql/.
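
For reference, checking the current value and raising it without a restart (possible since MySQL 5.7.5, where the variable became dynamic) looks roughly like this; the 1 GB figure is only an illustration:

show global variables like 'innodb_buffer%';
-- MySQL 5.7.5 and later accept this at runtime; the value is in bytes
SET GLOBAL innodb_buffer_pool_size = 1073741824;

On older versions you still have to edit my.cnf and restart, as described above.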

Bowen Xu
6

I am running MySQL on Windows with MySQL Workbench. Go to Server > Server Status; at the top it shows the configuration file path (C:\ProgramData\MySQL\...\my.ini).

Then in the my.ini file, press Ctrl+F and find buffer_pool_size. Set the value higher; I would recommend 64 MB (the default is 8 MB).

Restart the server by going to Instance > Startup/Shutdown > Stop Server (and then Start Server again).

In my case I could not delete entries from my table.

dejoma
5

Fixing Error code 1206: The number of locks exceeds the lock table size.

In my case, I work with MySQL Workbench (5.6.17) running on Windows with WampServer 2.5.

For Windows/WampServer you have to edit the my.ini file (not the my.cnf file)

To locate this file, go to the menu Server > Server Status (in MySQL Workbench) and look under Server Directories > Base Directory.

MySQL Server - Server Status

The my.ini file has sections for different settings; look for the [mysqld] section (create it if it does not exist) and add the setting innodb_buffer_pool_size=4G:

[mysqld]
innodb_buffer_pool_size=4G

The appropriate buffer pool size depends on your specific machine; in most cases, 2G or 4G will fix the problem.

Remember to restart the server so it picks up the new configuration; this corrected the problem for me.

Hope it helps!

Efrain Plaza
4

I was getting the same issue in MySQL while running an SQL script: Error code 1206: The number of locks exceeds the lock table size.

This is a MySQL configuration issue, so I made some changes in my.ini; with them in place it works on my system and the issue is resolved.

The changes go in my.ini, which is found at the following path: C:\ProgramData\MySQL\MySQL Server 5.7\my.ini. Update the following fields in the config file:

key_buffer_size=64M
read_buffer_size=64M
read_rnd_buffer_size=128M
innodb_log_buffer_size=10M
innodb_buffer_pool_size=256M
query_cache_type=2
max_allowed_packet=16M

After making all of the above changes, restart the MySQL service.

2

On Windows: if you have MySQL Workbench, go to Server Status and find the location of the running server's configuration file. In my case it was:

C:\ProgramData\MySQL\MySQL Server 5.7

Open the my.ini file and find buffer_pool_size. Set the value higher; the default value is 8M. This is how I fixed the problem.

Ch HaXam
2

If you have properly structured your tables so that each contains relatively unique values, then a less intensive way to do this would be three separate INSERT INTO statements, one for each table, with the join filter in place for each insert:

INSERT INTO SkusBought...

SELECT t1.customer, t1.SKU, t1.TypeDesc
FROM transactiondatatransit AS T1
LEFT OUTER JOIN topThreetransit AS T2
ON t1.customer = t2.customernum
WHERE T2.customernum IS NOT NULL

Repeat this for the other two tables; copy/paste is a fine method, simply change the FROM table name. If you are trying to prevent duplicated entries in your SkusBought table, you can add the following join in each section prior to the WHERE clause (a combined example appears at the end of this answer).

LEFT OUTER JOIN SkusBought AS T3
ON  t1.customer = t3.customer
AND t1.sku = t3.sku

and then add this as the last line of the WHERE clause:

AND t3.customer IS NULL

Your initial code uses a number of sub-queries, and the UNION statement can be expensive, as it first creates its own temporary table to populate the data from the three separate sources before inserting into the table you want, along with running another sub-query to filter results.
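
Put together, one of the three inserts with the duplicate-prevention join would look roughly like this (a sketch using the question's table names; repeat it for the other two source tables and add back the original cat/daysfrom07jan1 filters as needed):

INSERT INTO SkusBought (customerNum, sku, typedesc)
SELECT t1.customer, t1.SKU, t1.TypeDesc
FROM transactiondatatransit AS t1
LEFT OUTER JOIN topThreetransit AS t2
    ON t1.customer = t2.customernum
LEFT OUTER JOIN SkusBought AS t3
    ON t1.customer = t3.customer
    AND t1.sku = t3.sku
WHERE t2.customernum IS NOT NULL
  AND t3.customer IS NULL;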

1

This answer does not directly address the OP's question. However, I'm adding it here because this page is the first result when you Google "The total number of locks exceeds the lock table size".


If the query you are running is parsing an entire table that spans millions of rows, you can try a while loop instead of changing limits in the configuration.

The while loop will break it into pieces. Below is an example looping over an indexed column that is a DATETIME.

# Drop
DROP TABLE IF EXISTS
new_table;

# Create (we will add keys later)
CREATE TABLE
new_table
(
    num INT(11),
    row_id VARCHAR(255),
    row_value VARCHAR(255),
    row_date DATETIME
);

# Change the delimiter
DELIMITER //

# Create procedure
CREATE PROCEDURE do_repeat(IN current_loop_date DATETIME)
BEGIN

    # Loops WEEK by WEEK until NOW(). Change WEEK to something shorter like DAY if you still get lock errors.
    WHILE current_loop_date <= NOW() DO

        # Do something
        INSERT INTO
            new_table
            (
                num,
                row_id,
                row_value,
                row_date
            )
        SELECT
            # Do something interesting here
            num,
            row_id,
            row_value,
            row_date
        FROM
            old_table
        WHERE
            row_date >= current_loop_date AND
            row_date < current_loop_date + INTERVAL 1 WEEK;

        # Increment
        SET current_loop_date = current_loop_date + INTERVAL 1 WEEK;

    END WHILE;

END//

# Change the delimiter back
DELIMITER ;

# Run
CALL do_repeat('2017-01-01');

# Cleanup
DROP PROCEDURE IF EXISTS do_repeat;

# Add keys
ALTER TABLE
    new_table
MODIFY COLUMN
    num int(11) NOT NULL,
ADD PRIMARY KEY
    (num),
ADD KEY
    row_id (row_id) USING BTREE,
ADD KEY
    row_date (row_date) USING BTREE;

You can also adapt it to loop over the "num" column if your table doesn't use a date.

Hope this helps someone!

Joseph Shih
0

It is worth saying that the figure used for this setting is in BYTES - found that out the hard way!
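
In other words, a bare number is interpreted as bytes, while a suffix makes the unit explicit; the two lines below both request 256 MB (shown only as an illustration):

innodb_buffer_pool_size=268435456
# equivalent, using the M suffix:
innodb_buffer_pool_size=256M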

Antony
0

Good Day,

I have had the same error when trying to remove millions of rows from a MySQL table.

My resolution had nothing to do with changing the MySQL configuration file; I just reduced the number of rows targeted by specifying a maximum id per transaction. Instead of targeting all the rows in one transaction, I would suggest targeting portions per transaction. It might take more transactions to get the job done, but at least you will get somewhere, rather than fiddling around with the MySQL configuration.

Example:

delete from myTable where groupKey in ('any1', 'any2') and id < 400000;

and not

delete from myTable where groupKey in ('any1', 'any2');

The query might still be optimized by using GROUP BY and ORDER BY clauses.
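
Building on the ranged delete above: if the id range is large, the same statement can simply be repeated over successive ranges (the boundaries below are made up; pick ranges that keep each statement's lock count manageable):

delete from myTable where groupKey in ('any1', 'any2') and id < 400000;
delete from myTable where groupKey in ('any1', 'any2') and id >= 400000 and id < 800000;
delete from myTable where groupKey in ('any1', 'any2') and id >= 800000 and id < 1200000;
-- ...and so on until the full range is covered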

Mini-Man
0

I was facing a similar problem when trying to insert a few million rows into a database using Python. The solution is to group the inserts into smaller chunks to reduce memory usage, using the executemany function to insert each chunk and committing after each chunk instead of executing a single commit at the end.

def insert(query, items, conn):
    # items is expected to be a pandas DataFrame; conn is a DB-API connection.
    GROUPS = 10
    cursor = conn.cursor()
    total = items.shape[0]
    group = -(-total // GROUPS)  # ceiling division so no trailing rows are dropped
    items = list(items.itertuples(name=None, index=None))
    for i in range(GROUPS):
        chunk = items[group * i : group * (i + 1)]
        if chunk:
            cursor.executemany(query, chunk)
            conn.commit()
        print('#', end='')
    print()

There's also a neat progress bar in the implementation above.

0

It helps to insert/update rows in batches.

To insert rows from one table to another:

CREATE TABLE `table_from` (
    `id` int NOT NULL AUTO_INCREMENT PRIMARY KEY,
    `foo` VARCHAR(10),
    `bar` VARCHAR(10),
    `baz` VARCHAR(10)
);

INSERT INTO `table_from` VALUES
(NULL, 'test1', 'test2', 'test3'),
(NULL, 'test4', 'test5', 'test6'),
(NULL, 'test7', 'test8', 'test9');

CREATE TABLE `table_to` (
    `id` int NOT NULL AUTO_INCREMENT PRIMARY KEY,
    `foo` VARCHAR(10),
    `bar` VARCHAR(10),
    `baz` VARCHAR(10)
);

DELIMITER $$
CREATE PROCEDURE `insert_rows_into_table`()
BEGIN
    DECLARE `start` INT DEFAULT 0;
    DECLARE `batch_size` INT DEFAULT 100000;
    DECLARE `total_rows` INT DEFAULT 0;

    -- Get the total row count.
    SELECT COUNT(*) INTO `total_rows` FROM `table_from`;

    -- Process rows in batches.
    WHILE `start` < `total_rows` DO
        START TRANSACTION;

        INSERT INTO `table_to` (`foo`, `bar`, `baz`)
        WITH `rows_to_insert` AS
        (
            SELECT `foo`, `bar`, `baz`
            FROM `table_from`
            LIMIT `start`, `batch_size`
        )
        SELECT `foo`, `bar`, `baz` FROM `rows_to_insert`;

        COMMIT;

        SET `start` = `start` + `batch_size`;
    END WHILE;
END $$
DELIMITER ;

CALL `insert_rows_into_table`();

SELECT * FROM `table_to`;

DB Fiddle

To update all rows:

CREATE TABLE `my_table` (
    `id` int NOT NULL AUTO_INCREMENT PRIMARY KEY,
    `foo` VARCHAR(10),
    `bar` VARCHAR(10),
    `baz` VARCHAR(10)
);

INSERT INTO `my_table` VALUES
(NULL, 'test1', 'test2', 'test3'),
(NULL, 'test4', 'test5', 'test6'),
(NULL, 'test7', 'test8', 'test9');

DELIMITER $$
CREATE PROCEDURE `update_column_in_table`()
BEGIN
    DECLARE `start` INT DEFAULT 0;
    DECLARE `batch_size` INT DEFAULT 100000;
    DECLARE `total_rows` INT DEFAULT 0;

    -- Get the total row count.
    SELECT COUNT(*) INTO `total_rows` FROM `my_table`;

    -- Process rows in batches.
    WHILE `start` < `total_rows` DO
        START TRANSACTION;

        WITH `rows_to_update` AS
        (
            SELECT `id`
            FROM `my_table`
            LIMIT `start`, `batch_size`
        )
        UPDATE `my_table`
        JOIN `rows_to_update` ON `my_table`.`id` = `rows_to_update`.`id`
        SET
            `my_table`.`foo` = UPPER(`my_table`.`foo`),
            `my_table`.`bar` = REVERSE(`my_table`.`bar`),
            `my_table`.`baz` = REPEAT(`my_table`.`baz`, 2)
        WHERE `my_table`.`id` = `rows_to_update`.`id`;

        COMMIT;

        SET `start` = `start` + `batch_size`;
    END WHILE;
END $$
DELIMITER ;

CALL `update_column_in_table`();

SELECT * FROM `my_table`;

DB Fiddle

Tyler