
This is my web crawler, which generates an item containing a title, a URL and a name:

import scrapy
from ..items import ContentsPageSFBItem

class BasicSpider(scrapy.Spider):
    name = "contentspage_sfb"
    #allowed_domains = ["web"]
    start_urls = [
        'https://www.safaribooksonline.com/library/view/shell-programming-in/9780134496696/',
        'https://www.safaribooksonline.com/library/view/cisa-certified-information/9780134677453/'
    ]

    def parse(self, response):
        item = ContentsPageSFBItem()

        #from scrapy.shell import inspect_response
        #inspect_response(response, self)

        content_items = response.xpath('//ol[@class="detail-toc"]//a/text()').extract()

        for content_item in content_items:
            item['content_item'] = content_item
            item["full_url"] = response.url
            item['title'] = response.xpath('//title[1]/text()').extract()

            yield item

The code works perfectly. However, due to the nature of the crawling, a lot of data is generated. My intention is to split the results so that each parsed URL gets its own CSV file. I am using the following pipeline code:

from scrapy import signals
from scrapy.contrib.exporter import CsvItemExporter


class ContentspageSfbPipeline(object):
    def __init__(self):
        self.files = {}

    def from_crawler(cls, crawler):
        pipeline = cls()
        crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
        crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
        return pipeline

    def spider_opened(self, contentspage_sfb):
        file = open('results/%s.csv' % contentspage_sfb.url, 'w+b')
        self.files[contentspage_sfb] = file
        self.exporter = CsvItemExporter(file)
        self.exporter.fields_to_export = ['item']
        self.exporter.start_exporting()

    def spider_closed(self, contentspage_sfb):
        self.exporter.finish_exporting()
        file = self.files.pop(contentspage_sfb)
        file.close()

    def process_item(self, item, contentspage_sfb):
        self.exporter.export_item(item)
        return item

However, I get an error:

TypeError: unbound method from_crawler() must be called with ContentspageSfbPipeline instance as first argument (got Crawler instance instead)

As suggested, I added the decorator before the from_crawler function. However, now I get an AttributeError:

Traceback (most recent call last):
  File "/home/eadaradhiraj/program_files/venv/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/home/eadaradhiraj/program_files/pycharm_projects/javascriptlibraries/javascriptlibraries/pipelines.py", line 39, in process_item
    self.exporter.export_item(item)
AttributeError: 'ContentspageSfbPipeline' object has no attribute 'exporter'

I have based my code on How to split output from a list of urls in scrapy.


1 Answer


You are missing the @classmethod decorator on your from_crawler() method.

See the related Meaning of @classmethod and @staticmethod for beginner? for an explanation of what classmethods are.
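For example, applied to your pipeline, the method would look roughly like this (same body as yours, only the decorator added):

@classmethod
def from_crawler(cls, crawler):
    pipeline = cls()
    crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
    crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
    return pipeline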

Also, you don't need to connect any signals in your pipeline. Pipelines can simply define open_spider and close_spider methods, as per the official docs.
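A minimal sketch of such a pipeline, without any signal wiring, might look something like this (it assumes the results/ directory already exists and writes one file per spider run rather than per start URL, so splitting the output per URL would still need extra logic):

from scrapy.contrib.exporter import CsvItemExporter  # scrapy.exporters in newer Scrapy versions


class ContentspageSfbPipeline(object):

    def open_spider(self, spider):
        # Called automatically when the spider starts.
        self.file = open('results/%s.csv' % spider.name, 'w+b')
        self.exporter = CsvItemExporter(self.file)
        self.exporter.start_exporting()

    def close_spider(self, spider):
        # Called automatically when the spider finishes.
        self.exporter.finish_exporting()
        self.file.close()

    def process_item(self, item, spider):
        self.exporter.export_item(item)
        return item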
