
I am reading a list of URLs from a links.csv file that looks like this:

/node/2553181
/node/2553439
/node/2552825

I am opening each URL and scraping it with the following code:

def start_requests(self):
    # read each relative path from links.csv and turn it into an absolute URL
    with open('./links.csv', 'r') as f:
        for line in f.readlines():
            url = 'https://example.com' + line.strip()
            yield scrapy.Request(url, callback=self.parse_page)

def parse_page(self, response):
    item = posts()
    item["poston"] = response.xpath("//h1[@id='page-subtitle']/text()").extract()
    item["postby"] = response.xpath("//div[contains(@id,'node-')]/div[1]/a/@href").extract()
    return [item]

I am able to export the content to a single result.csv file using:

scrapy crawl postspider -t csv -o result.csv
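(I believe the same single-file export could also be configured on the spider itself rather than on the command line; this is just the settings equivalent of the command above, with the class name assumed:)

class PostsSpider(scrapy.Spider):
    name = "postspider"
    # same effect as: scrapy crawl postspider -t csv -o result.csv
    custom_settings = {
        'FEED_FORMAT': 'csv',
        'FEED_URI': 'result.csv',
    }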

But I'd like to save the results in a separate file for each item, like this:

2553181.csv
2553439.csv
2552825.csv

The solution provided in "How can scrapy export items to separate csv files per item" doesn't address my problem, since I need to export to a different CSV file named after each input URL.
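One approach I am considering (untested, and not from the linked answer) is to pass the node id from the input path through Request.meta and write each item to its own CSV file inside the callback. A rough sketch of what I mean, reusing the selectors and field names from my code above:

import csv
import scrapy

class posts(scrapy.Item):
    poston = scrapy.Field()
    postby = scrapy.Field()

class PostsSpider(scrapy.Spider):
    name = "postspider"

    def start_requests(self):
        with open('./links.csv', 'r') as f:
            for line in f:
                path = line.strip()                  # e.g. /node/2553181
                node_id = path.rsplit('/', 1)[-1]    # e.g. 2553181
                # carry the id along so the callback knows which file to write
                yield scrapy.Request('https://example.com' + path,
                                     callback=self.parse_page,
                                     meta={'node_id': node_id})

    def parse_page(self, response):
        item = posts()
        item["poston"] = response.xpath("//h1[@id='page-subtitle']/text()").extract()
        item["postby"] = response.xpath("//div[contains(@id,'node-')]/div[1]/a/@href").extract()

        # write this item into its own file, e.g. 2553181.csv
        # (poston/postby are lists from extract(); join or index them as needed)
        with open('%s.csv' % response.meta['node_id'], 'w', newline='') as f:
            writer = csv.writer(f)
            writer.writerow(['poston', 'postby'])
            writer.writerow([item["poston"], item["postby"]])

        yield item

The per-file writing could probably also live in an item pipeline instead, but the meta trick is the part that maps each scraped item back to the input URL it came from. Is this reasonable, or is there a proper feed-export way to do it?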

