14

I'm trying to create a custom Scrapy Item Exporter based on JsonLinesItemExporter so I can slightly alter the structure it produces.

I have read the documentation here http://doc.scrapy.org/en/latest/topics/exporters.html but it doesn't state how to create a custom exporter, where to store it, or how to link it to your Pipeline.

I have identified how to customize the Feed Exporters, but this is not going to suit my requirements, as I want to call this exporter from my Pipeline.

Here is the code I've come up with, which is stored in a file called exporters.py in the root of the project:


from scrapy.contrib.exporter import JsonLinesItemExporter
from scrapy.utils.serialize import ScrapyJSONEncoder

class FanItemExporter(JsonLinesItemExporter):

    def __init__(self, file, **kwargs):
        self._configure(kwargs, dont_fail=True)
        self.file = file
        self.encoder = ScrapyJSONEncoder(**kwargs)
        self.first_item = True

    def start_exporting(self):
        self.file.write("""{
            'product': [""")

    def finish_exporting(self):
        self.file.write("]}")

    def export_item(self, item):
        if self.first_item:
            self.first_item = False
        else:
            self.file.write(',\n')
        itemdict = dict(self._get_serialized_fields(item))
        self.file.write(self.encoder.encode(itemdict))

I have simply tried calling this from my pipeline by using FanItemExporter, trying variations of the import, but it's not working.

bnussey
  • Can you show how have you tried to use the exporter and what errors/outcome did you get? Thanks. – alecxe Oct 23 '15 at 00:27
  • Hey @alecxe, so I tried to call the exporter in the pipeline but it didn't detect it. Any ideas? – bnussey Oct 23 '15 at 03:27
  • 1
    Can you post your entries of `settings.py` where you configure your exporter? And how do you call it from your pipeline? – GHajba Oct 23 '15 at 05:46

3 Answers

28

This answer is now outdated. See Max's answer below for an easier approach.

It is true that the Scrapy documentation does not clearly state where to place an Item Exporter. To use an Item Exporter, follow these steps:

  1. Choose an Item Exporter class and import it into pipelines.py in the project directory. It can be a pre-defined Item Exporter (e.g. XmlItemExporter) or user-defined (like FanItemExporter defined in the question).
  2. Create an Item Pipeline class in pipelines.py and instantiate the imported Item Exporter in this class. Details are explained in the later part of the answer.
  3. Register this pipeline class in the settings.py file.

A detailed explanation of each step follows. The solution to the question is included within the steps.

Step 1

  • If using a pre-defined Item Exporter class, import it from scrapy.exporters module.
    Ex: from scrapy.exporters import XmlItemExporter

  • If you need a custom exporter, define a custom class in a file. I suggest placing the class in an exporters.py file. Place this file in the project folder (where settings.py and items.py reside).

    While creating a new sub-class, it is always a good idea to start from BaseItemExporter. That is apt if we intend to change the functionality entirely. In this question, however, most of the required functionality is close to what JsonItemExporter already provides.

Hence, I am attaching two versions of the same Item Exporter: one extends the BaseItemExporter class and the other extends the JsonItemExporter class.

Version 1: Extending BaseItemExporter

Since BaseItemExporter is the parent class, start_exporting(), finish_exporting() and export_item() must be overridden to suit our needs.

from scrapy.exporters import BaseItemExporter
from scrapy.utils.serialize import ScrapyJSONEncoder
from scrapy.utils.python import to_bytes

class FanItemExporter(BaseItemExporter):

    def __init__(self, file, **kwargs):
        self._configure(kwargs, dont_fail=True)
        self.file = file
        self.encoder = ScrapyJSONEncoder(**kwargs)
        self.first_item = True

    def start_exporting(self):
        # Open the custom structure: a top-level object holding a 'product' array
        self.file.write(b'{\'product\': [')

    def finish_exporting(self):
        # Close the array and the surrounding object
        self.file.write(b'\n]}')

    def export_item(self, item):
        # Separate consecutive items with a comma and newline
        if self.first_item:
            self.first_item = False
        else:
            self.file.write(b',\n')
        itemdict = dict(self._get_serialized_fields(item))
        self.file.write(to_bytes(self.encoder.encode(itemdict)))

Version 2: Extending JsonItemExporter

JsonItemExporter provides the exact same implementation of the export_item() method; therefore only the start_exporting() and finish_exporting() methods are overridden.

The implementation of JsonItemExporter can be found in scrapy/exporters.py of your Scrapy installation (for example, python_dir\pkgs\scrapy-1.1.0-py35_0\Lib\site-packages\scrapy\exporters.py).

from scrapy.exporters import JsonItemExporter

class FanItemExporter(JsonItemExporter):

    def __init__(self, file, **kwargs):
        # Initialize the object using JsonItemExporter's constructor,
        # forwarding any keyword arguments
        super().__init__(file, **kwargs)

    def start_exporting(self):
        # Open the custom structure: a top-level object holding a 'product' array
        self.file.write(b'{\'product\': [')

    def finish_exporting(self):
        # Close the array and the surrounding object
        self.file.write(b'\n]}')

Note: When writing data to the file, it is important to know that the standard Item Exporter classes expect binary files. Hence, the file must be opened in binary mode ('b'). For the same reason, the write() calls in both versions write bytes to the file.
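For reference, here is a minimal standalone sketch of using the exporter with a binary-mode file, outside of any pipeline (the dict item and file name are illustrative; Scrapy's exporters accept plain dicts as items):

from project_name.exporters import FanItemExporter

# Note the 'wb' mode: the exporter writes bytes
with open('products.json', 'wb') as f:
    exporter = FanItemExporter(f)
    exporter.start_exporting()
    exporter.export_item({'name': 'example', 'price': 10})
    exporter.finish_exporting()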

Step 2

Create an Item Pipeline class in pipelines.py:

from project_name.exporters import FanItemExporter

class FanExportPipeline(object):
    def __init__(self, file_name):
        # Storing output filename
        self.file_name = file_name
        # Creating a file handle and setting it to None
        self.file_handle = None

    @classmethod
    def from_crawler(cls, crawler):
        # getting the value of FILE_NAME field from settings.py
        output_file_name = crawler.settings.get('FILE_NAME')
        
        # cls() calls FanExportPipeline's constructor
        # Returning a FanExportPipeline object
        return cls(output_file_name)
    
    def open_spider(self, spider):
        print('Custom export opened')

        # Opening file in binary-write mode
        file = open(self.file_name, 'wb')
        self.file_handle = file

        # Creating a FanItemExporter object and initiating export
        self.exporter = FanItemExporter(file)
        self.exporter.start_exporting()
    
    def close_spider(self, spider):
        print('Custom Exporter closed')

        # Ending the export to file from FanItemExport object
        self.exporter.finish_exporting()

        # Closing the opened output file
        self.file_handle.close()
    
    def process_item(self, item, spider):
        # Passing the item to the FanItemExporter object for exporting to file
        self.exporter.export_item(item)
        return item

Step 3

Now that the Item Export Pipeline is defined, register it in the settings.py file. Also add a FILE_NAME field to the settings.py file; this field contains the filename of the output file.

Add the following lines to the settings.py file:

FILE_NAME = 'path/outputfile.ext'
ITEM_PIPELINES = {
    'project_name.pipelines.FanExportPipeline' : 600,
}

If ITEM_PIPELINES is already uncommented, add just the following entry to the ITEM_PIPELINES dictionary:

'project_name.pipelines.FanExportPipeline' : 600,

This is one way to create a custom Item Export pipeline.
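Assuming two scraped items with illustrative fields, the output file produced by this pipeline should look roughly like this (following the overridden start_exporting(), export_item() and finish_exporting() methods):

{'product': [{"name": "item1", "price": 10},
{"name": "item2", "price": 20}
]}

Note that the single-quoted 'product' key makes the output technically invalid JSON; write b'{"product": [' in start_exporting() if strict JSON is required.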

pbskumar
  • I've seen a similar example in the guide, but using a built-in exporter. It's a bit redundant to always use the pipeline, though. Is there any way to bypass it? – Lore Feb 20 '18 at 16:13
  • I suspect that this solution is outdated (or about to be) and possibly deprecated since it is using `crawler`. It would have been great to see an alternative solution, not using already present classes to be overridden. For example, something like an [SQLite3](https://github.com/RockyZ/Scrapy-sqlite-item-exporter) exporter. (That one doesn't seem to work either.) – not2qubit Oct 06 '18 at 18:00
  • Thank you. Maybe someone can contribute to this issue: https://github.com/scrapy/scrapy/issues/5706 – wowiamhere Jan 10 '23 at 09:42
  • @Lore here is a way without a custom pipeline now: https://stackoverflow.com/a/76573895/502263 Sorry for necroposting :) – Max Jun 28 '23 at 14:00
  • @Max Thanks for posting an easier way to do this. – pbskumar Jul 05 '23 at 22:16
0

You no longer need to create a custom pipeline, as described in https://stackoverflow.com/a/38626022/502263. Just create your custom exporter in exporters.py (use any module name you like, but change the configuration values accordingly):

from scrapy.exporters import JsonLinesItemExporter as BaseExporter
from pint import Quantity


class JsonLinesItemExporter(BaseExporter):
    def serialize_field(self, field, name, value):
        # Convert pint quantities to plain strings so they JSON-serialize
        if isinstance(value, Quantity):
            value = str(value)
        return super().serialize_field(field, name, value)

Then add your exporter to FEED_EXPORTERS in settings.py for the related export formats:

FEED_EXPORTERS = {
    "jsonlines": "myproject.exporters.JsonLinesItemExporter",
    "jsonl": "myproject.exporters.JsonLinesItemExporter",
    "jl": "myproject.exporters.JsonLinesItemExporter",
}
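With the exporter registered, the normal feed-export machinery picks it up automatically. For example, a FEEDS setting like the following (the file path is illustrative) routes jsonlines output through the custom class, as would scrapy crawl myspider -O items.jl on the command line:

FEEDS = {
    "output/items.jl": {"format": "jsonlines"},
}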

See the default exporters in the Scrapy docs: https://docs.scrapy.org/en/latest/topics/feed-exports.html#feed-exporters

Max
  • I believe I found a simpler way @max https://stackoverflow.com/a/76723920/12675102 – R3_ Jul 19 '23 at 17:40
0

AFAIK you only need to set the serializer when you define the item field:

field = scrapy.Field(serializer=str)

If it's really important to check whether the type is Quantity, then creating a custom serializer should be enough:

import scrapy
from pint import Quantity


def my_serializer(value):
    # Stringify pint quantities; pass all other values through unchanged
    return str(value) if isinstance(value, Quantity) else value


class Item(scrapy.Item):
    field = scrapy.Field(serializer=my_serializer)
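As a quick sketch of how the serializer kicks in, you can run an item through one of the stock exporters; field serializers are applied by the exporters, not by the Item itself (the printed output is an assumption based on pint's default string formatting):

import io

from pint import UnitRegistry
from scrapy.exporters import JsonLinesItemExporter

ureg = UnitRegistry()
buf = io.BytesIO()

# Export one Item (as defined above) holding a pint Quantity
exporter = JsonLinesItemExporter(buf)
exporter.export_item(Item(field=3 * ureg.meter))

print(buf.getvalue())  # expected: b'{"field": "3 meter"}\n'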
R3_