I have 2 databases, with ~100,000 rows missing from the table `field_collection_item` in `db1`, which I'd like to repair by exporting from `db2`.
My plan to accomplish this was to:

- identify the missing items by `item_id` in `db2`, exporting the list of `item_id`s.
- import the `item_id`s into `db1` into a new table `missing_field_collection_item` (roughly as sketched below).
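For reference, the comparison step looks roughly like this. It's only a sketch: it assumes both databases live on the same MySQL server, and that `item_id` is an unsigned integer (adjust to the real column definition).

```sql
-- Helper table holding the item_ids that db1 is missing.
CREATE TABLE db1.missing_field_collection_item (
    item_id INT UNSIGNED NOT NULL PRIMARY KEY
);

-- item_ids present in db2's field_collection_item but absent from db1's.
INSERT INTO db1.missing_field_collection_item (item_id)
SELECT f2.item_id
FROM db2.field_collection_item AS f2
LEFT JOIN db1.field_collection_item AS f1 USING (item_id)
WHERE f1.item_id IS NULL;
```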
I'm using the following mysqldump command to pull the data:

```
mysqldump -u USER -pPASS DATABASE --no-create-info --tables field_collection_item --where="item_id IN (SELECT item_id FROM missing_field_collection_item);"
```
However, this gives the error:

```
Couldn't execute 'SELECT /*!40001 SQL_NO_CACHE */ * FROM `field_collection_item` WHERE item_id IN (SELECT item_id FROM missing_field_collection_item);': Table 'missing_field_collection_item' was not locked with LOCK TABLES (1100)
```
I would prefer to do this without making changes to `db2`; however, it's not strictly necessary, if it turns out the only realistic way to do this is to drop the rows I don't want and then dump without a `WHERE` clause.
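For completeness, that fallback would look roughly like this. Again a sketch assuming both databases are on the same server, and it does modify `db2`, which is what I'd rather avoid:

```sql
-- Remove from db2's copy every row db1 already has, so a plain dump of
-- field_collection_item (no --where) contains only the missing rows.
DELETE f2
FROM db2.field_collection_item AS f2
JOIN db1.field_collection_item AS f1 USING (item_id);
```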
UPDATE
I discovered the above works simply by adding `--single-transaction`, which seems to turn off locking. This should be safe since `db2` isn't live; however, I'm not confident I understand any side-effects, so I won't accept this as the answer without a second opinion.
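For reference, the invocation that works is the original command with only that flag added:

```
mysqldump -u USER -pPASS DATABASE --single-transaction --no-create-info --tables field_collection_item --where="item_id IN (SELECT item_id FROM missing_field_collection_item);"
```

My (unconfirmed) understanding is that `--single-transaction` makes mysqldump read from a single consistent snapshot instead of issuing `LOCK TABLES`, which is why the subquery against the second table no longer fails; that's exactly the part I'd like a second opinion on.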