
I'm running MySQL 5.0.22 and have a really unwieldy table containing approximately 5 million rows.

Some, but not all, of the rows are linked by a foreign key to another table.

All attempts to cull the unreferenced rows have failed so far, resulting in lock-timeouts every time.

Copying the rows I want to an alternate table also failed with lock-timeout.

Suspiciously, even a statement that should finish instantaneously, like the one below, also fails with a lock timeout:

DELETE FROM mytable WHERE uid_pk = 1 LIMIT 1;
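When a trivial single-row DELETE times out like this, it usually means some other (possibly forgotten) transaction is still holding a lock on the row or table. A quick way to check is sketched below using standard MySQL statements; note that in 5.0 the 50-second innodb_lock_wait_timeout can only be changed at server startup, so killing the blocker is usually the practical fix:

-- The TRANSACTIONS section of the output shows which transactions
-- hold locks and which are waiting (output format varies by version):
SHOW ENGINE INNODB STATUS;

-- List all connections; a stuck one can then be ended with KILL <id>:
SHOW PROCESSLIST;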

...it's at this point that I've run out of ideas.

Edit: For what it's worth, I've been working through this on my dev system, so I'm the only one actually using the database at the moment and there shouldn't be any locking going on outside of the SQL I'm running.

Any MySQL gurus out there have suggestions on how to tame this rogue table?

Edit #2: As requested, the table structure:

CREATE TABLE `tunknowncustomer` (
  `UID_PK` int(11) NOT NULL auto_increment,
  `UNKNOWNCUSTOMERGUID` varchar(36) NOT NULL,
  `CREATIONDATE` datetime NOT NULL,
  `EMAIL` varchar(100) default NULL,
  `CUSTOMERUID` int(11) default NULL,
  PRIMARY KEY  (`UID_PK`),
  KEY `IUNKNOWCUST_CUID` (`CUSTOMERUID`),
  KEY `IUNKNOWCUST_UCGUID` (`UNKNOWNCUSTOMERGUID`),
  CONSTRAINT `tunknowncustomer_ibfk_1` FOREIGN KEY (`CUSTOMERUID`) REFERENCES `tcustomer` (`UID_PK`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8$$

Note that attempting to drop the FK also times out.
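For reference, that drop attempt, using the constraint name from the dump above, looks like this:

ALTER TABLE `tunknowncustomer` DROP FOREIGN KEY `tunknowncustomer_ibfk_1`;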

Rocjoe

2 Answers


I had the same problem with an InnoDB table. OPTIMIZE TABLE corrected it.
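For the table in the question that would be the statement below. Note that for InnoDB, OPTIMIZE TABLE is mapped to a full table rebuild, so expect it to run for a while on 5 million rows:

OPTIMIZE TABLE tunknowncustomer;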

Albert

OK, I finally found an approach that worked for trimming the unwanted rows from my large InnoDB table. Here's how I did it:

  1. Stopped using MySQL Workbench (it has a hard-coded execution timeout of 30 seconds)
  2. Opened a command prompt
  3. Renamed the "full" table using ALTER TABLE
  4. Created an empty table with the original table name and structure
  5. Restarted MySQL
  6. Turned OFF autocommit with SET AUTOCOMMIT = 0
  7. Deleted a limited number of rows at a time, ramping up my limit after each success
  8. Ran a COMMIT; in between delete statements, since turning off autocommit left me inside one large transaction

The whole effort looked somewhat like this:

ALTER TABLE `ep411`.`tunknowncustomer` RENAME TO  `ep411`.`tunknowncustomer2`;

...strangely enough, renaming the table was the only ALTER TABLE command that would finish right away.

delimiter $$

CREATE TABLE `tunknowncustomer` (
    ...
 ) ENGINE=InnoDB DEFAULT CHARSET=utf8$$

...then a restart, just in case my previous failed attempts could block any new work...

SET AUTOCOMMIT = 0;

delete from tunknowncustomer2 where customeruid is null limit 1000;

delete from tunknowncustomer2 where customeruid is null limit 100000;

commit;

delete from tunknowncustomer2 where customeruid is null limit 1000000;

delete from tunknowncustomer2 where customeruid is null limit 1000000;

commit;

...Once I got up to deleting 100k rows at a time, InnoDB's execution time dropped with each successful command. I assume InnoDB starts doing read-aheads on large scans. Since doing a COMMIT would reset the read-ahead data, I spaced the COMMITs out to every 2 million rows until the job was done.

I wrapped up the task by copying the remaining rows into my "empty" clone table, then dropping the old (renamed) table.
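A sketch of that final step, using the column list from the CREATE TABLE in the question (the WHERE clause keeps only the rows that reference a customer):

INSERT INTO tunknowncustomer (UID_PK, UNKNOWNCUSTOMERGUID, CREATIONDATE, EMAIL, CUSTOMERUID)
  SELECT UID_PK, UNKNOWNCUSTOMERGUID, CREATIONDATE, EMAIL, CUSTOMERUID
  FROM tunknowncustomer2
  WHERE CUSTOMERUID IS NOT NULL;
COMMIT;

DROP TABLE tunknowncustomer2;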

Not a graceful solution, and it doesn't address why deleting even a single row from a large table should fail, but at least I got the result I was looking for!

Rocjoe