I'm having issues tidying up malformed XML code I'm getting back from the SEC's edgar database.

For some reason they have horribly formed XML. Tags that contain any sort of string aren't closed, and a tag can actually contain other XML or HTML documents inside it. Normally I'd hand this off to Tidy, but that isn't being maintained.

I've tried using Nokogiri::XML::SAX::Parser, but it seems to choke because the tags aren't closed. It works all right until it hits the first closing tag, and then it doesn't fire on any more of them. It is spitting out the right characters, though.

  class Filing < Nokogiri::XML::SAX::Document
    def start_element name, attrs = []
      puts "starting: #{name}"
    end

    def characters str
      puts "chars: #{str}"
    end

    def end_element name
      puts "ending: #{name}"
    end
  end

It seems like this would be the best option because I could simply have it ignore the embedded XML or HTML documents. It also makes the most sense because some of these documents can get quite large, so storing the whole DOM in memory would probably not work.

Here are some example files: 1 2 3

I'm starting to think I'll just have to write my own custom parser.

hadees
  • Please define "quite large" when you say a file is large. Most machines these days can easily swallow multi-gigabyte files. – the Tin Man Aug 16 '11 at 03:31

1 Answer

Nokogiri's normal DOM mode is able to automatically fix up the XML so it is syntactically correct, or a reasonable facsimile of that. It sometimes gets confused and will shift closing tags around, but you can preprocess the file to give it a nudge in the right direction if need be.
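That nudge can be as simple as closing any tag whose value runs to the end of the line, which is how EDGAR's header-style tags tend to look. A rough sketch in plain Ruby (the tag pattern and the one-tag-per-line assumption are guesses on my part, so check them against your files):

```ruby
# Sketch: append a closing tag to lines like "<TYPE>10-K" where the value
# runs to end-of-line. Assumes uppercase EDGAR-style tag names, one tag
# per line; container tags sitting alone on a line are left untouched.
def close_dangling_tags(xml)
  xml.gsub(/^<([A-Z][A-Z0-9-]*)>([^<\r\n]+)$/) { "<#{$1}>#{$2}</#{$1}>" }
end

puts close_dangling_tags("<COMPANY>\n<NAME>Acme Corp\n</COMPANY>")
```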

I saved the XML #1 out to a document and loaded it:

require 'nokogiri'

# Parse the saved filing in DOM (recovery) mode.
doc = File.open('./test.xml') { |fi| Nokogiri::XML(fi) }

puts doc.to_xml

After parsing, you can check the Nokogiri::XML::Document instance's errors method to see what errors were generated, for perverse pleasure.

doc.errors

If using Nokogiri's DOM model isn't good enough, have you considered using xmllint to preprocess and clean the data, emitting clean XML so the SAX parser will work? Its --recover option might be of use.

xmllint --recover test.xml

It will output errors on stderr and the recovered XML on stdout, so you can easily redirect each to a separate file.
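For instance (the file names are just placeholders):

```shell
# Recovered XML goes to clean.xml; parser complaints go to errors.log.
xmllint --recover test.xml > clean.xml 2> errors.log
```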

As for writing your own parser... why? You have other options available to you, and reinventing a nicely implemented wheel is not a good use of time.

the Tin Man
  • Neither of those solutions actually works with the example files. They put most of the closing tags at the end. – hadees Aug 16 '11 at 04:41
  • 2
    Which is EXACTLY why I said sometimes YOU have to preprocess the file to give the parser enough information to do things correctly. – the Tin Man Aug 16 '11 at 06:00
  • @hadees Neither of those solutions works because they are trying to recover XML that isn't well-formed, which is explicitly forbidden by the standard. The tools have no way of knowing where to close the tags. – Serabe Aug 17 '11 at 21:56