
I'm having trouble ALTERing a huge (InnoDB) MySQL table. The ID column of the table (which is the primary key) is defined as an UNSIGNED INT and has reached its maximum value (4294967295).
To be able to add more rows to this table, I need to change the type of this column to BIGINT. However, a standard MySQL ALTER command (and every other solution I've found so far) will generate a new table with the new definition and then copy all data into it. For this table, that requires 942.0 GB of free disk space, while I only have 271 GB available (and there is no other partition that has the required disk space).

Are there any solutions that do NOT require an entire duplicate of the original table (but rather move the data to the new table, or something similar)?
I don't care about not being able to access the table while it's being altered; the table can be completely locked for several hours without a problem (I can't use it at the moment anyway).
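
For reference, the straightforward statement that triggers the full table rebuild would look something like this (the table name is a placeholder, and I'm assuming the ID is an auto-increment primary key):

    -- Changing the primary key's data type is not an in-place operation,
    -- so MySQL rebuilds (copies) the whole table, hence the 942 GB requirement
    ALTER TABLE my_big_table MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;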

user1834095
  • Buy more disk space. If the table were originally partitioned, you would be able to do the work partition by partition. But because you don't have enough space, this is going to be a challenge. – Gordon Linoff Jun 27 '17 at 12:25

2 Answers


Since you have 271 GB of free disk space and you are OK with the table being inaccessible for a few hours, follow the steps below:

  1. Create a new table, say tbl_temp, with ID as BIGINT, keeping the rest of the table structure exactly the same.
  2. Create a simple process (in any server-side scripting language you use) that selects a row from the original table and inserts it into tbl_temp.
  3. Delete the inserted row from the original table.
  4. When all rows have been moved, your original table will be empty. Delete the original table.
  5. Rename tbl_temp back to the original table name.

This way you will be able to migrate the entire data set with the existing disk space.
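
For illustration, here is a minimal SQL sketch of those steps, assuming the original table is called big_table with an auto-increment primary key id (both are placeholders). It moves rows in batches of 10,000 instead of one at a time, which is just an efficiency tweak over the row-by-row description above:

    -- Step 1: create the copy and widen the ID column while it is still empty (cheap)
    CREATE TABLE tbl_temp LIKE big_table;
    ALTER TABLE tbl_temp MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;

    -- Steps 2-3: repeat this pair (e.g. from a small shell loop) until big_table is empty;
    -- the DELETE removes exactly the rows just copied, since nothing else writes to the table
    INSERT INTO tbl_temp SELECT * FROM big_table ORDER BY id LIMIT 10000;
    DELETE FROM big_table ORDER BY id LIMIT 10000;

    -- Steps 4-5: drop the empty original and rename the copy into place
    DROP TABLE big_table;
    RENAME TABLE tbl_temp TO big_table;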

Samir Selia

I accepted Samir's answer, as it is the best solution in my opinion, but I solved the problem in a slightly different way. What I didn't think of at first was that we have an AWS account and (CLI) access to S3. So, I did the following:

  1. Made a mysqldump of all the data in the (original) table and streamed it directly to S3 (since I didn't have enough disk space to store the dump locally).
    mysqldump -f --no-create-info --lock-tables db_name table_name | gzip -c | aws s3 cp - s3://bucket-name/mysqldump.sql.gz --expected-size 130608821125
  2. Copied the CREATE statement of the original table to a notepad (I used Notepad++).
  3. DROP the original table (yes, entirely, to make room for the new one).
  4. CREATE the new table, with exactly the same name as the original, using an updated CREATE statement (with the new BIGINT definition I needed); a sketch of steps 2-4 follows this list.
  5. Filled the new table with all the data from the dump created earlier:
    aws s3 cp s3://bucket-name/mysqldump.sql.gz - | gzip -d | mysql db_name
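
As referenced above, here is a hypothetical SQL sketch of steps 2-4, with big_table and a single payload column standing in for the real table definition (the actual column list comes from the CREATE statement saved in step 2):

    SHOW CREATE TABLE big_table;                 -- step 2: save this output before dropping anything
    DROP TABLE big_table;                        -- step 3
    CREATE TABLE big_table (                     -- step 4: same definition, but with id widened
      id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
      payload VARCHAR(255) NOT NULL,             -- placeholder for the original remaining columns
      PRIMARY KEY (id)
    ) ENGINE=InnoDB;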

The advantage of this method over Samir's answer is that it is less error-prone, since there are no scripts to write to make it work.
The disadvantage is that (I think) it takes longer to complete the whole process, because of the additional compression, decompression and network transfer. The whole thing took roughly 5 days in my case, while I think Samir's solution would have been faster, which is why I accepted it.

user1834095