
Please take a look at this spider example in the Scrapy documentation. The explanation says:

This spider would start crawling example.com’s home page, collecting category links, and item links, parsing the latter with the parse_item method. For each item response, some data will be extracted from the HTML using XPath, and an Item will be filled with it.

I copied the same spider exactly and replaced "example.com" with a different initial URL.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from stb.items import StbItem

class StbSpider(CrawlSpider):
    domain_name = "stb"
    start_urls = ['http://www.stblaw.com/bios/MAlpuche.htm']

    rules = (Rule(SgmlLinkExtractor(allow=(r'/bios/.\w+\.htm', )), callback='parse', follow=True), )

    def parse(self, response):
        hxs = HtmlXPathSelector(response)

        item = StbItem()
        item['JD'] = hxs.select('//td[@class="bodycopysmall"]').re('\d\d\d\d\sJ.D.')
        return item

SPIDER = StbSpider()

But my spider "stb" does not collect links from "/bios/" as it is supposed to. It crawls the initial URL, scrapes item['JD'], writes it to a file, and then quits.

Why is the SgmlLinkExtractor being ignored? The Rule is definitely being read, because syntax errors inside the Rule line are caught.

Is this a bug? Is there something wrong in my code? There are no errors, apart from a bunch of unhandled errors that I see with every run.

It would be nice to know what I am doing wrong here. Thanks for any clues. Am I misunderstanding what SgmlLinkExtractor is supposed to do?

Zeynel
  • When I see "There are no errors except a bunch unhandled errors that I see with every run," I have to scratch my head. – Jonathan Feinberg Nov 28 '09 at 01:12
  • Sorry, those are deprecation warnings. The errors I was seeing were caused by having telnet and the shell open at the same time, as mentioned by Pablo Hoffman here: http://stackoverflow.com/questions/1767553/twisted-errors-in-scrapy-spider. Once I closed the shell, I don't see them any more. Any clues why the allowed links are not scraped? – Zeynel Nov 28 '09 at 01:40

1 Answer


The parse method is implemented and used internally by the CrawlSpider class, and you're unintentionally overriding it. If you rename your callback to something else, like parse_item, then the Rule should work.
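In other words, keep the spider exactly as it is and only rename the callback. A minimal, untested sketch of the renamed version, against the same old scrapy.contrib API the question uses (parse_item is just the conventional name; anything other than parse works):

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from stb.items import StbItem

class StbSpider(CrawlSpider):
    domain_name = "stb"
    start_urls = ['http://www.stblaw.com/bios/MAlpuche.htm']

    # the callback no longer shadows CrawlSpider.parse, so link following works
    rules = (Rule(SgmlLinkExtractor(allow=(r'/bios/.\w+\.htm', )), callback='parse_item', follow=True), )

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        item = StbItem()
        item['JD'] = hxs.select('//td[@class="bodycopysmall"]').re('\d\d\d\d\sJ.D.')
        return item

SPIDER = StbSpider()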

Jacob
  • Thanks. I wrote a very simple Python spider and that works for me. – Zeynel Jan 21 '10 at 15:07
  • Interestingly, I have the same problem. When I change it to something else, however, I get a "NotImplementedError" for "parse". – bdd Mar 10 '11 at 19:13
  • Are you inheriting from CrawlSpider? If not, then you do need a method named "parse" (see the sketch below). – Jacob Mar 14 '11 at 00:00
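For reference, a minimal sketch of the non-CrawlSpider case mentioned in the last comment, using the same old-style API as the question (the class name is made up, and the start URL and XPath are copied from the question, not tested):

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from stb.items import StbItem

class StbBioSpider(BaseSpider):
    domain_name = "stb"
    start_urls = ['http://www.stblaw.com/bios/MAlpuche.htm']

    # a plain spider has no Rules; Scrapy calls parse() for each start URL,
    # so here the method must be named parse
    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        item = StbItem()
        item['JD'] = hxs.select('//td[@class="bodycopysmall"]').re('\d\d\d\d\sJ.D.')
        return item

SPIDER = StbBioSpider()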