2

I am trying to import a large MySQL dump file and am getting a consistent error at line 149,351 (out of 4,207,442). I would like to see what that line is.

The best way I have found to do this is:

head -149351 dump.sql | tail

However, this is very slow. I tried loading the file in vi, but it wasn't able to handle a file that big. I am on Windows XP and have Cygwin.

womble
Jeremy French

9 Answers

5
sed -n '149351p' dump.sql

might be slightly faster than the head/tail combination (but maybe not). Vartec is correct; there is no quicker way than reading at least the first 149,351 lines.
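
If your sed is GNU sed, telling it to quit as soon as it has printed the line saves scanning the remaining ~4 million lines:

sed -n '149351{p;q}' dump.sql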

bmb
4

You can see the individual line with the following command:

tail -n +<line number> <file> | head -n1

In your case: tail -n +149351 dump.sql | head -n1

That command starts tail at line 149,351 and uses head to display only the first line of its output.
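
If a bit of context around the failing statement would help, the same idea works; this should print the five lines on either side of line 149,351 as well:

tail -n +149346 dump.sql | head -n 11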

Brent
3

I would recommend using the split command to break that huge dump into more manageable pieces:

split -l20000 mysql.dump mysql.dump.

This will create files named mysql.dump.aa, mysql.dump.ab, and so on. Each file will contain 20,000 lines; editing tools should be able to handle files that small easily!

Once you've fixed the problem, recombine them easily:

cat mysql.dump.* > mysql.dumptest

or

cat mysql.dump.* | mysqlimportcommand
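
Before making any edits, it is easy to sanity-check that splitting and recombining round-trips cleanly (assuming the original dump is still on disk); cmp prints nothing if the two files are identical:

cmp mysql.dump mysql.dumptest
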
MikeyB
  • The split will have to scan the whole file for line endings, so unless you're doing repeated searches on the same file, you gain nothing. – vartec May 08 '09 at 16:22
  • Well, you gain the ability to see it in full context, as well as to parse it *once* and then work with it later. It's very possible that there are more errors in the file. If that turns out to be the case, he'll now be able to go right to the spot where the next error is without having to reparse the whole thing. – MikeyB May 08 '09 at 16:46
  • Shell globbing provides no guarantee that mysql.dump.* will expand in sorted order (although it is _likely_ that the files will come back in the order created). If you want to use this method, it's a good idea to sort the names of the files before cat'ing them. – Dave Cheney May 09 '09 at 14:41
  • Fair enough - I'm used to bash, which will guarantee an alphabetic sort of a glob replacement. – MikeyB May 10 '09 at 05:18
2

I usually just fire up TextPad, even on 1 GB files.

Ctrl+G opens the Go To dialog, where you can enter a line number.

Dani
  • He is trying to do that on a Linux (*NIX) box. Doing what you recommend implies transferring the file to a Windows machine. – May 08 '09 at 15:49
  • Actually it is Windows with Cygwin. – Jeremy French May 08 '09 at 15:55
  • I second this idea. There are several very capable text editors which can open multi-GB files quickly. TextPad is one. Personally I use the now-defunct CodeWright. Same difference. – Simon Gillbee May 08 '09 at 16:21
2

If the lines are of variable length, there really is no quicker way than scanning through the first 149,351 of them (which is exactly what you are doing with "head").
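
Purely to illustrate why: if every line were a fixed length (say exactly 200 bytes, an assumption just for this example), you could seek straight to the line without reading anything before it:

dd if=dump.sql bs=200 skip=149350 count=1

With variable-length lines there is no way to know where line 149,351 starts without counting the newlines in front of it.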

vartec
2

Surprised no one suggested:

grep -n 149351 dump.sql
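
Note that grep -n prints matching lines together with their line numbers, so this finds lines containing the text 149351 rather than printing line 149,351 itself. To print that exact line, an awk one-liner (the same idea as the sed answer above) works:

awk 'NR==149351 {print; exit}' dump.sql
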
HTTP500
2

Since it's Windows, I'm surprised no one has suggested PowerShell:

get-content dump.sql | select-object -index 149350

(select-object -index is zero-based, so index 149350 is line 149,351.)
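
If the machine has enough RAM to hold the whole dump, reading it in one go with -readcount 0 and indexing the resulting array may be quicker than streaming it through the pipeline (a sketch, not tested here; it loads all ~4.2 million lines into memory):

(get-content dump.sql -readcount 0)[149350]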

Jim B
1

Surely in vi you can turn on line numbers using:

:set number

and then go to line 149351 using:

:149351

Alternatively, you can start vi at a specific line number:

vi +149351 dump.sql

Hope that helps.

Jonathan Holloway
1

Open it with nano (with -w so long lines aren't wrapped), and once it's open hit Ctrl+_ and enter the line number to go to.
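
nano can also jump straight to a given line when opening the file (the +line syntax should be available in any reasonably recent nano):

nano -w +149351 dump.sql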