
I can export a single table like this:

mysqldump -u root -p db_name table_name > table_name.sql

Is there any way to export only a portion of a table with mysqldump? For example, 0 - 1,000,000 rows, 1,000,000 - 2,000,000 rows, etc.

Should I do this with mysqldump or a query?

shobhit
datasn.io

7 Answers

mysqldump -uroot -p db_name table_name --where='id<1000000'

or you can use

SELECT * INTO OUTFILE 'data_path.sql' from table where id<100000
Frederic Klein
neocanable
  • If using this for incremental backup of any sort, be sure to include `--skip-create-options` and possibly `--skip-add-drop-table` just to be safe. Otherwise you will delete all rows from any pre-existing table. – ebyrob Sep 18 '18 at 20:13
  • You forgot to mention `-t` and `> dump_file_name.dump`; it should be: `mysqldump -t -uroot -p db_name table_name -w"id<1000000" > file_name.dump` – Anum Sheraz Dec 18 '18 at 13:15
  • The first solution doesn't seem to allow joining tables; it only dumps a specific one. The second one works with joins while dumping a specific table, but its output can't be loaded back as SQL directly. Any way to get around that? Thanks. – HCSF Mar 06 '19 at 04:01
  • If you need to use quotes in your where clause, just use double quotes: `--where='name="foo"'` – Daan Jan 10 '20 at 12:05
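The `--where` approach extends naturally to exporting fixed-size id chunks into separate files. A minimal sketch, assuming an auto-increment `id` column; `db_name`, `table_name`, the 3,000,000-row total, and the chunk size are placeholders, and the loop only prints each command so it can be inspected before running:

```shell
#!/bin/sh
# Print one mysqldump command per 1,000,000-row id chunk.
# db_name, table_name, and the row total are placeholders.
rows_per_chunk=1000000
total_rows=3000000
start=0
while [ "$start" -lt "$total_rows" ]; do
  end=$((start + rows_per_chunk))
  # printf keeps this a dry run; drop it to invoke mysqldump directly
  printf '%s\n' "mysqldump -u root -p db_name table_name --where='id >= $start AND id < $end' > chunk_$start.sql"
  start=$end
done
```

The half-open ranges (`>=` / `<`) ensure no id is skipped or duplicated at chunk boundaries.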
mysqldump --skip-triggers --compact --no-create-info --user=USER --password=PASSWORD -B DATABASE --tables MY_TABLE --where='SOME_COLUMN>=xxxx' > out.sql
cwallenpoole
noisex

The file produced by SELECT ... INTO OUTFILE is not an SQL dump; it is raw column data (tab-delimited by default). So with the second approach you cannot simply run mysql database < file to load the table back into a database.
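A file written by `SELECT ... INTO OUTFILE` is loaded back with `LOAD DATA INFILE`, not with the `mysql` client; a minimal sketch, where the path and `table_name` are placeholders:

```sql
-- Reload a file written by SELECT ... INTO OUTFILE.
-- The defaults (tab-delimited columns, newline-terminated rows)
-- match OUTFILE's defaults, so no FIELDS/LINES clauses are needed here.
LOAD DATA INFILE '/tmp/data_path.sql'
INTO TABLE table_name;
```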

user2755358

The question is as current as ever; most people find these sorts of questions because they suffer from the single-threaded design of mysql and mysqldump. If you have millions or billions of rows, exporting can take days (or even weeks), so you often end up exporting only parts of the data instead.

A quick hack is to export the data in portions; this works best if you have a numeric key (such as an auto-increment id). Below is a Linux/Unix example that exports a table roughly 20-100 times faster than normal.

Assumptions: column "id" runs from 1 to 10,000,000; the CPU has 16 threads; the disk is an SSD or NVMe.

seq 0 1000 | xargs -n1 -P16 -I{} mysqldump -h localhost --password=PASSWORD --single-transaction DATABASE TABLE --where "id >= {}*10000 AND id < ({}+1)*10000" -r output.{}

The above command runs 16 parallel threads, cutting export time to roughly 1/10 of normal. It creates one file per chunk, and those files can also be loaded in parallel, which speeds up loading by up to 10 times.
On a strong server I use up to 150 parallel threads; the right number depends on the type of disk and CPU you are running.
This method, refined a bit, can cut a 1-week export or load down to a few hours.

When doing this over a network, --compress can help a lot. Generating INSERT IGNORE statements (--insert-ignore) also helps with faulty MySQL indexes that are unavoidable on large data, and loading the dumps with 'mysql -f' further helps to avoid stopping in such cases.

P.S. Never use the mysqldump options that add indexes and keys at the end of the dump on large tables.
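The chunk arithmetic can be checked without touching a server: the sketch below dry-runs the first four workers, printing the WHERE clause each would pass to mysqldump (MySQL evaluates the `{}*10000` arithmetic server-side; the full run would use `seq 0 1000`):

```shell
#!/bin/sh
# Dry run of the parallel chunking: echo stands in for mysqldump,
# so only the per-worker WHERE clauses are printed.
seq 0 3 | xargs -n1 -P4 -I{} echo "id >= {}*10000 AND id < ({}+1)*10000"
```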

John
mysqldump -uroot -p db_name table_name --where='id<1000000' > yourdumpname.sql
  • Rolled back your change to the accepted answer because the documentation uses `=` for the where option: http://dev.mysql.com/doc/refman/8.0/en/mysqldump.html#option_mysqldump_where (same in 5.7: http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_where). Alternatives are `--where 'condition'` or `-w'condition'`, but the version `--where'condition'` gives `mysqldump: unknown option '--wherecondition'`. If there are version-specific differences, please specify the conditions. – Frederic Klein Nov 17 '16 at 08:38

The query below selects an id range; you can use date_created (or any other column) instead of id:

mysqldump --opt --host=dns --user=user_name --password=your_passwd db_name --tables table_name --where "id > 1 and id < 100" > /file_name.sql

For example: --where="date_created > '2019-01-18'" instead of id.

sachin_ur

In my case I executed this:

SELECT * 
  INTO OUTFILE 'C:\Documents and Settings\Anton.Zarkov\Desktop\joomla_export\data_AZ.sql'
  FROM `jos_glossary`
 WHERE id > 6000

There is no syntax error; the query passes through.

1. The result is NULL: no rows were written. (I'm sure the last id is 6458.)
2. If I repeat the query, an error occurs: #1086 - File 'C:Documents and SettingsAnton.ZarkovDesktopjoomla_exportdata_AZ.sql' already exists
3. Unfortunately I cannot find the "existing" file anywhere on disk C. Where is it?

The environment: phpMyAdmin SQL Dump, version 3.4.5; host: localhost; server version: 5.5.16; PHP version: 5.3.8.

Yaroslav
  • This is because you need to write the path with double backslashes: 'C:\\Documents and Settings\\Anton.Zarkov\\Desktop\\joomla_export\\data_AZ.sql' – Oleksandr H Mar 26 '14 at 14:21