
I am brand new to Python, and I need some help with the syntax for finding and iterating through HTML tags using lxml. Here are the use cases I am dealing with:

The HTML file is fairly well formed (but not perfect). It has multiple tables: one containing a set of search results, and one each for a header and a footer. Each result row contains a link to the search result detail.

  1. I need to find the middle table with the search result rows (this one I was able to figure out):

        self.mySearchTables = self.mySearchTree.findall(".//table")
        self.myResultRows = self.mySearchTables[1].findall(".//tr")
    
  2. I need to find the links contained in this table (this is where I'm getting stuck):

        for searchRow in self.myResultRows:
            searchLink = searchRow.findall(".//a")
    

    It doesn't seem to actually locate the link elements.

  3. I need the plain text of the link. I imagine it would be something like searchLink.text if I actually got the link elements in the first place.

Finally, in the actual API reference for lxml, I wasn't able to find information on the find() and findall() calls. I gleaned these from bits of code I found on Google. Am I missing something about how to effectively find and iterate over HTML tags using lxml?
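For reference, here is a self-contained version of what I am attempting, with made-up sample HTML standing in for my document (the three-table layout here is just an illustration of the structure I described):

```python
from lxml.html import fromstring

sample = """
<html><body>
<table><tr><td>header</td></tr></table>
<table>
  <tr><td><a href="/detail/1">Result one</a></td></tr>
  <tr><td><a href="/detail/2">Result two</a></td></tr>
</table>
<table><tr><td>footer</td></tr></table>
</body></html>
"""

mySearchTree = fromstring(sample)
mySearchTables = mySearchTree.findall(".//table")   # all three tables
myResultRows = mySearchTables[1].findall(".//tr")   # rows of the middle one

for searchRow in myResultRows:
    for searchLink in searchRow.findall(".//a"):    # links within each row
        print(searchLink.text, searchLink.get("href"))
```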

Shaheeb Roshan

2 Answers


Okay, first, regarding parsing the HTML: if you follow the recommendation of zweiterlinde and S.Lott, at least use the version of BeautifulSoup included with lxml. That way you will also reap the benefit of a nice XPath or CSS selector interface.

However, I personally prefer Ian Bicking's HTML parser included in lxml.

Secondly, .find() and .findall() come from lxml trying to be compatible with ElementTree, and those two methods are described in the ElementTree documentation under XPath Support.
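As a quick, self-contained illustration of those two calls, here they are on a made-up snippet using the standard library's xml.etree.ElementTree, which lxml's methods mirror:

```python
import xml.etree.ElementTree as ET

snippet = """
<table>
  <tr><td><a href="/a">first</a></td></tr>
  <tr><td><a href="/b">second</a></td></tr>
</table>
"""

table = ET.fromstring(snippet)

first_row = table.find(".//tr")      # .find() returns the first match (or None)
all_links = table.findall(".//a")    # .findall() returns a list of matches

print(first_row.tag)                 # tr
print([a.text for a in all_links])   # ['first', 'second']
```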

Those two functions are fairly easy to use, but they support only a very limited subset of XPath. I recommend using either the full lxml xpath() method or, if you are already familiar with CSS, the cssselect() method.

Here are some examples, with an HTML string parsed like this:

from lxml.html import fromstring
mySearchTree = fromstring(your_input_string)

Using the CSS selector method, your program would look roughly like this:

# Find all 'a' elements inside 'tr' table rows with a CSS selector
for a in mySearchTree.cssselect('tr a'):
    print('found "%s" link to href "%s"' % (a.text, a.get('href')))

The equivalent using the xpath() method would be:

# Find all 'a' elements inside 'tr' table rows with XPath
for a in mySearchTree.xpath('.//tr/*/a'):
    print('found "%s" link to href "%s"' % (a.text, a.get('href')))
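One caveat on getting the plain text: a.text only returns the text before the link's first child element, so a label containing nested markup gets cut off. The lxml-specific text_content() method returns the full visible text. A small sketch:

```python
from lxml.html import fromstring

doc = fromstring(
    '<table><tr><td><a href="/x">see <b>details</b> here</a></td></tr></table>'
)
a = doc.xpath('.//tr//a')[0]

print(a.text)            # 'see ' -- stops at the nested <b>
print(a.text_content())  # 'see details here' -- the full label
```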
Van Gale
  • Yay! Just what I needed. I interpreted the cssselect to actually require the elements to have a declared css class. The nested finding logic is just what I needed! Thank you Van Gale! – Shaheeb Roshan Mar 02 '09 at 20:48
  • This page recommends to use iterchildren and iterdescendants with the tag option. http://www.ibm.com/developerworks/xml/library/x-hiperfparse/#N10239 – endolith Jan 26 '10 at 04:58
  • 1
    Great answer, but as a minor quibble -- why `.//tr/*/a` rather than `.//tr//a`? The former would fail to find anything with an extra intervening tag, ie. `..` – Charles Duffy Mar 19 '12 at 16:16

Is there a reason you're not using Beautiful Soup for this project? It will make dealing with imperfectly formed documents much easier.
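For comparison, a minimal Beautiful Soup version of the same table/link scan might look like this (shown with the modern bs4 package name, which postdates this answer; the sample HTML is made up):

```python
from bs4 import BeautifulSoup

html = """
<table><tr><td>header</td></tr></table>
<table>
  <tr><td><a href="/detail/1">Result one</a></td></tr>
  <tr><td><a href="/detail/2">Result two</a></td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")

# the second table holds the results
for a in soup.find_all("table")[1].find_all("a"):
    print(a.get_text(), a["href"])
```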

zweiterlinde
  • 2
    I started with Beautiful Soup, but I had no luck. I mentioned in my question that my doc is fairly well-formed, but it is missing the ending body block. It simply drops all the content when I pull it into the parser. Hence lxml. Also, http://tinyurl.com/37u9gu indicated better mem mgmt with lxml – Shaheeb Roshan Mar 02 '09 at 20:22
  • 7
    I used BeautifulSoup at first, but it doesn't handle bad HTML as well as it claims. It also doesn't support items with multiple classes, etc. lxml.html is better for everything I've done with it. – endolith Jan 26 '10 at 04:59
  • 11
    BeautifulSoup is (a) unmaintained, (b) slower than lxml, and (c) less powerful than lxml. – Humphrey Bogart Feb 22 '11 at 17:34
  • 2
    @BeauMartínez: I know this post is a year old, but just to keep users informed: BS *is* currently maintained; there was even a new release recently. And it does use lxml internally depending on the constructor argument you use. – ThiefMaster Mar 29 '12 at 05:40