6

I'm writing an import script that processes a log file with potentially hundreds of thousands of lines. Using a very simple approach (below) took enough time and memory that I felt like it would take out my MBP at any moment, so I killed the process.

#...
File.open(file, 'r') do |f|
  f.each_line do |line|
    # do stuff here to line
  end
end

This file in particular has 642,868 lines:

$ wc -l nginx.log
  642868 ../nginx.log

Does anyone know of a more efficient (memory/cpu) way to process each line in this file?

UPDATE

The code inside the `f.each_line` block above simply matches a regex against the line. If the match fails, I add the line to a `@skipped` array. If it passes, I format the matches into a hash (keyed by the "fields" of the match) and append it to a `@results` array.

# regex built in `def initialize` (not on each line iteration)
@regex = /(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}) - (.{0})- \[([^\]]+?)\] "(GET|POST|PUT|DELETE) ([^\s]+?) (HTTP\/1\.1)" (\d+) (\d+) "-" "(.*)"/

#... loop lines
match = line.match(@regex)
if match.nil?
  @skipped << line
else
  @results << convert_to_hash(match)
end

I'm completely open to this being an inefficient process. I could make `convert_to_hash` (sketched below) use a precomputed lambda instead of figuring out the computation each time. I guess I just assumed it was the line iteration itself that was the problem, not the per-line code.
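
For reference, `convert_to_hash` isn't shown in the thread; a minimal version along these lines just zips the capture groups with field names (the names here are assumptions):

# Hypothetical sketch of convert_to_hash: pair each capture group
# from the regex above with a field name.
FIELDS = [:ip, :user, :timestamp, :method, :path, :protocol, :status, :bytes, :agent]

def convert_to_hash(match)
  Hash[FIELDS.zip(match.captures)]
end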

localshred
  • The most memory-efficient way is how you're doing it with `each_line`. You could read the file in blocks, which is faster, then use `String#lines` to grab individual lines, rejoining any partially loaded lines that crossed the block boundaries (see the sketch below). It becomes a wash having to split out the lines and rejoin broken ones. – the Tin Man Jan 31 '11 at 00:36
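
For illustration, a rough sketch of that block-reading approach; the block size, `file`, and `handle` are placeholders, not code from the thread:

# Read in fixed-size blocks and carry any partial line at the end of
# each block over to the next read.
buffer = ''
File.open(file, 'r') do |f|
  while chunk = f.read(64 * 1024)                # 64 KB block size is arbitrary
    buffer << chunk
    lines = buffer.lines.to_a
    # The last element may be a partial line; hold it back for the next block.
    buffer = (lines.last && !lines.last.end_with?("\n")) ? lines.pop : ''
    lines.each { |line| handle(line) }           # handle stands in for the per-line work
  end
  handle(buffer) unless buffer.empty?            # a final line with no trailing newline
end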

3 Answers

5

I just did a test on a 600,000-line file and it iterated over the file in less than half a second. I'm guessing the slowness is not in the file looping but in the line parsing. Can you paste your parsing code as well?
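
The timing code isn't included in the answer; a check along these lines (the filename is assumed) makes the same kind of measurement:

require 'benchmark'

# IO.foreach streams the file line by line without loading it all into memory.
seconds = Benchmark.realtime do
  IO.foreach('nginx.log') { |line| line }   # no-op per line, just the iteration cost
end
puts "iterated in #{seconds.round(2)}s"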

SLaks
  • The only piece of code of any significance is that I'm matching the line against a semi-complicated regex. The regex doesn't do any look-behind/look-ahead; it's mostly just a char-by-char match. I'll post an update above with the relevant code. – localshred Jan 31 '11 at 04:19
  • Oh, and the regex is computed once, not on each iteration (just to be clear). – localshred Jan 31 '11 at 04:26
  • It turns out it was my own foolishness causing the memory growth. I was storing the matched results (and the skipped lines) in arrays that I used for db inserts later (or to print the number of skips). I know, I'm dumb. :) Now I just `puts` the skipped lines and do the db insert as soon as a match is valid (see the sketch below), and real memory never goes above 30 MB. Thanks for pointing out that I was probably just doing things in a dumb way. :) (Oh, and I switched to `IO.foreach` like your original answer suggested.) – localshred Jan 31 '11 at 04:50
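
A sketch of that restructured loop; `insert_row` is a stand-in for the actual db insert, not code from the thread:

# Stream the file and act on each line immediately instead of
# accumulating @results/@skipped arrays in memory.
IO.foreach('nginx.log') do |line|
  match = line.match(@regex)
  if match.nil?
    puts line                            # was: @skipped << line
  else
    insert_row(convert_to_hash(match))   # was: @results << convert_to_hash(match)
  end
end
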
4

This blog post includes several approaches to parsing large log files. Maybe that's an inspiration. Also have a look at the file-tail gem.

hukl
1

If you are using bash (or similar) you might be able to optimize like this:

In input.rb, keep just the per-line work; the -n flag in the command below supplies the read loop, with each line in $_:

 # with -n, $_ holds the current line
 line = $_
 # Parse line here

then in bash:

 cat nginx.log | ruby -n input.rb

The -n flag tells Ruby to assume a 'while gets(); ... end' loop around your script, with each line available in $_, which might cause it to do something special to optimize.
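
For a quick check, the same idea fits in a one-liner from bash (the 404 filter is just an example):

 ruby -ne 'puts $_ if $_ =~ / 404 /' nginx.log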

You might also want to look into a prewritten solution to the problem, as that will be faster.

Andrew Amis