
I'm looking to do some rudimentary cleansing of HTML. Basically, I want to create a whitelist of allowed tags and reject everything else.

Is Hpricot worth it in this case? Does it have a feature I've overlooked that will save me from reinventing the wheel? Or is it best to just write a whitelist of tags as a regex and massage an HTML document through that?

Regex can get really tricky with HTML, and I know a lot of experts are strictly against it - I'm just looking for the path of least resistance.
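To make the trickiness concrete, here is a sketch of a naive regex whitelist (the allowed tags and inputs are made up for illustration) and an input that slips right past it:

```ruby
# Naive approach: strip any tag that isn't an exact <b>, <i>, <em>, or <strong>.
ALLOWED_TAG = %r{\A</?(?:b|i|em|strong)>\z}

def regex_sanitize(html)
  html.gsub(/<[^>]*>/) { |tag| ALLOWED_TAG.match?(tag) ? tag : '' }
end

puts regex_sanitize('<b>bold</b> <script>evil()</script>')
# => "<b>bold</b> evil()"  -- tags stripped, but the script *body* leaks through

puts regex_sanitize('<img src=x onerror=alert(1)')
# Returned unchanged: there's no closing '>', so /<[^>]*>/ never matches,
# yet browsers will happily repair and render this as a live <img> tag.
```

Malformed markup like the second input is exactly what browsers tolerate and regexes don't, which is why parser-based sanitizing wins once you don't control the input.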

randombits
  • The path of least resistance can seem to be regex at first, and for very simple tasks with content you control, it can be safe to use. Once your needs get more sophisticated, or you are parsing content you can't control, regex fails more and more. I've written spiders, page analyzers and feed aggregators, and there are some horribly malformed pages out there; a good parser can ease the pain of working with a huge amount of badness. – the Tin Man Apr 05 '11 at 04:31

1 Answer


The path of least resistance may seem to be a regex at first, but as you feed more text through it, you realize that it breaks again and again and makes more work for you. That is why experienced programmers reach for an XML/DOM parser for such a common problem.

I recommend Nokogiri over Hpricot, though, because it is faster and better maintained.

https://github.com/rgrove/sanitize/

Sanitize uses Nokogiri under the hood to do exactly what you're describing: you hand it a whitelist of elements and attributes and it strips everything else.

Michael Papile