
I'm trying to wean myself from BeautifulSoup, which I love but seems to be (aggressively) unsupported. I'm trying to work with html5lib and lxml, but I can't seem to figure out how to use the "find" and "findall" operators.

By looking at the docs for html5lib, I came up with this for a test program:

import cStringIO

f = cStringIO.StringIO()
f.write("""
  <html>
    <body>
      <table>
       <tr>
          <td>one</td>
          <td>1</td>
       </tr>
       <tr>
          <td>two</td>
          <td>2</td>
       </tr>
      </table>
    </body>
  </html>
  """)
f.seek(0)

import html5lib
from html5lib import treebuilders
from lxml import etree  # why?

parser = html5lib.HTMLParser(tree=treebuilders.getTreeBuilder("lxml"))
etree_document = parser.parse(f)

root = etree_document.getroot()

root.find(".//tr")

But this returns None. I noticed that if I do an etree.tostring(root) I get all my data back, but all my tags are prefixed with html: (e.g. <html:table>). And root.find(".//html:tr") throws a KeyError.

Can someone put me back on the right track?

Chris Curvey
    BeautifulSoup is not completely unsupported. Here the author explains the problems: http://www.crummy.com/software/BeautifulSoup/3.1-problems.html – leoluk Sep 12 '10 at 19:40

5 Answers

6

You can turn off namespaces with this command: etree_document = html5lib.parse(f, treebuilder="lxml", namespaceHTMLElements=False)
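
Here's a minimal end-to-end sketch of that call (the inline string is just a stand-in for the question's file object):

import html5lib

html = "<html><body><table><tr><td>one</td></tr></table></body></html>"

# With namespaceHTMLElements=False, html5lib leaves HTML tags
# un-namespaced, so plain ElementTree-style paths match again.
tree = html5lib.parse(html, treebuilder="lxml", namespaceHTMLElements=False)
root = tree.getroot()

print(root.find(".//tr"))  # e.g. <Element tr at 0x...>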

Chris
    Note this doesn't disable namespaces entirely: an `<svg>` element will still produce a `{http://www.w3.org/2000/svg}svg` node; it's only HTML elements that are kept out of the HTML namespace. – gsnedders Jul 18 '15 at 10:29
5

In general, use lxml.html for HTML. Then you don't need to build your own parser or worry about namespaces.

>>> import lxml.html as l
>>> doc = """
...    <html><body>
...    <table>
...      <tr>
...        <td>one</td>
...        <td>1</td>
...      </tr>
...      <tr>
...        <td>two</td>
...        <td>2</td>
...      </tr>
...    </table>
...    </body></html>"""
>>> doc = l.document_fromstring(doc)
>>> doc.findall('.//tr')
[<Element tr at ...>, <Element tr at ...>] #doctest: +ELLIPSIS

FYI, lxml.html also allows you to use CSS selectors, which I find to be an easier syntax.

>>> doc.cssselect('tr')
[<Element tr at ...>, <Element tr at ...>] #doctest: +ELLIPSIS
Tim McNamara
  • great! Now I just need to find the method in lxml to parse a file (instead of having to read a file into a string) and I'm all set. – Chris Curvey Sep 14 '10 at 22:55
  • ah, found the parse-a-file at http://stackoverflow.com/questions/3569152/parsing-html-with-lxml – Chris Curvey Sep 14 '10 at 23:00
  • Am glad to see you've been successful! Actually, it's simpler than that example: `lxml.html.parse` can take a URL, file name or file-like object as its argument. One gotcha though is that the function returns a tree, not the root element. Use `lxml.html.parse(file).getroot()` to get the root node. – Tim McNamara Sep 15 '10 at 20:39
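
A minimal sketch of the parse-a-file route those comments describe ("page.html" is just a placeholder file name):

import lxml.html

# lxml.html.parse accepts a URL, file name or file-like object and
# returns an ElementTree; call getroot() to get the root element.
tree = lxml.html.parse("page.html")
rows = tree.getroot().findall(".//tr")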
3

It appears that using the "lxml" html5lib TreeBuilder causes html5lib to build the tree in the XHTML namespace, which makes sense: lxml is an XML library, and XHTML is how one represents HTML as XML. You can use lxml's qualified-name syntax with the find() method to do something like:

root.find('.//{http://www.w3.org/1999/xhtml}tr')

Or you can use lxml's full XPath functions to do something like:

root.xpath('.//html:tr', namespaces={'html': 'http://www.w3.org/1999/xhtml'})

The lxml documentation has more information on how it uses XML namespaces.
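
Putting the two lookups together in runnable form (assuming, as in the question, that html5lib's lxml tree builder hands back an ElementTree):

import html5lib

XHTML = "http://www.w3.org/1999/xhtml"

tree = html5lib.parse("<table><tr><td>one</td></tr></table>",
                      treebuilder="lxml")
root = tree.getroot()

# qualified-name syntax with find()
print(root.find(".//{%s}tr" % XHTML))

# full XPath with an explicit prefix mapping
print(root.xpath(".//html:tr", namespaces={"html": XHTML}))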

llasram
  • The HTML specification requires that HTML elements be put in the HTML namespace, so this behaviour simply follows the spec. – gsnedders Jul 18 '15 at 10:30
2

I realize that this is an old question, but I came here in a quest for information I didn't find in any other one place. I was trying to scrape something with BeautifulSoup, but it was choking on some chunky HTML: the default parser is apparently stricter than some of the alternatives. One often-preferred parser is lxml, which I believe produces the same parse that browsers expect. BeautifulSoup allows you to specify lxml as the source parser, but using it requires a little bit of work.

First, you need html5lib AND you must also install lxml. While html5lib is prepared to use lxml (and some other libraries), the two do not come packaged together. (For Windows users: even though I don't like fussing with Win dependencies, and usually get libraries by copying them into my project directory, I strongly recommend using pip for this; it's pretty painless, though I think you need administrator access.)

Then you need to write something like this:

import urllib2
from bs4 import BeautifulSoup
import html5lib
from html5lib import sanitizer
from html5lib import treebuilders
from lxml import etree

url = 'http://...'

# Fetch the page and parse it with html5lib into an lxml tree;
# the sanitizer tokenizer strips unsafe markup along the way.
content = urllib2.urlopen(url)
parser = html5lib.HTMLParser(tokenizer=sanitizer.HTMLSanitizer,
                             tree=treebuilders.getTreeBuilder("lxml"),
                             namespaceHTMLElements=False)
htmlData = parser.parse(content)

# Serialize the cleaned-up tree back to a string and hand it to
# BeautifulSoup, telling it to use lxml as its parser.
htmlStr = etree.tostring(htmlData)

soup = BeautifulSoup(htmlStr, "lxml")

Then enjoy your beautiful soup!

Note the namespaceHTMLElements=False option on the parser. This is important because lxml is an XML library, and without that option html5lib puts every HTML tag into the XHTML namespace (as the HTML spec requires). The serialized tags then look like (for example)

<html:li>

and BeautifulSoup will not work well.
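
As a side note: recent versions of BeautifulSoup 4 can also hand parsing to html5lib directly, which skips the serialize-and-reparse round trip above (assuming html5lib is installed):

from bs4 import BeautifulSoup

# bs4 will use html5lib as its parser when asked for it by name
soup = BeautifulSoup("<table><tr><td>one</td></tr></table>", "html5lib")
print(soup.find("td").text)  # one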

ETB
0

Try:

root.find('.//{http://www.w3.org/1999/xhtml}tr')

You have to specify the namespace rather than a namespace prefix like html:tr. For more information, see the lxml docs, particularly the section on namespaces.

ars