
I am having scope issues with returning a Scrapy item (players) in my pipeline. I'm fairly certain I know what the issue is, but I'm not sure how to integrate the solution into my code. I am also certain that the pipeline itself is written correctly. The problem is that I've declared the players item inside the parseRoster() function, so its scope is limited to that function.

Now my question is: where do I need to declare a players item in my code for it to be visible to my pipeline? My goal is to get this data into my database. I assume it will be in the main loop of my code, and if that's the case, how can I return both the TeamStats item and my newly declared players item?

My code is below:

import scrapy

from nbastats.items import TeamStats, Player    ## adjust to your project's items module

class NbastatsSpider(scrapy.Spider):
    name = "nbaStats"

    start_urls = [
        "http://espn.go.com/nba/teams"                              ## only start URL allowed; had some issues when navigating to team roster pages
        ]
    def parse(self,response):
        items = []                                                                                                  ##array or list that stores TeamStats item
        i=0                                                                                                         ##counter needed for older code

        for division in response.xpath('//div[@id="content"]//div[contains(@class, "mod-teams-list-medium")]'):     
            for team in division.xpath('.//div[contains(@class, "mod-content")]//li'):
                item = TeamStats()
                item['division'] = division.xpath('.//div[contains(@class, "mod-header")]/h4/text()').extract()[0]            
                item['team'] = team.xpath('.//h5/a/text()').extract()[0]
                item['rosterurl'] = "http://espn.go.com" + team.xpath('.//div/span[2]/a[3]/@href').extract()[0]
                items.append(item)
                request = scrapy.Request(item['rosterurl'], callback = self.parseWPNow)
                request.meta['play'] = item

                yield request
                
        print(item)      

    def parseWPNow(self, response):
        item = response.meta['play']
        item = self.parseRoster(item, response)

        return item

    def parseRoster(self, item, response):
        players = Player()
        for player in response.xpath("//td[@class='sortcell']"):
            players['name'] = player.xpath("a/text()").extract()[0]
            players['position'] = player.xpath("following-sibling::td[1]").extract()[0]
            players['age'] = player.xpath("following-sibling::td[2]").extract()[0]
            players['height'] = player.xpath("following-sibling::td[3]").extract()[0]
            players['weight'] = player.xpath("following-sibling::td[4]").extract()[0]
            players['college'] = player.xpath("following-sibling::td[5]").extract()[0]
            players['salary'] = player.xpath("following-sibling::td[6]").extract()[0]
            yield players
        item['playerurl'] = response.xpath("//td[@class='sortcell']/a").extract()
        yield item
Tonechas
user3042850

1 Answer


According to the relevant part of Scrapy's data flow documentation:

The Engine sends scraped Items (returned by the Spider) to the Item Pipeline and Requests (returned by spider) to the Scheduler

In other words, return/yield your item instances from the spider, and they will be passed to the process_item() method of your pipeline. Since you have multiple item classes, distinguish between them with the isinstance() built-in function:

def process_item(self, item, spider):
    if isinstance(item, TeamStats):
        # process team stats
        return item

    if isinstance(item, Player):
        # process player
        return item
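To make the dispatch concrete, here is a minimal, self-contained sketch of that pattern. The dict-based TeamStats and Player stand-ins and the handle_* helper names are assumptions for illustration only; in a real project they would be the scrapy.Item subclasses from items.py and your own database-writing code:

```python
# Stand-ins for the project's scrapy.Item classes (shapes assumed for illustration).
class TeamStats(dict):
    pass

class Player(dict):
    pass

class NbaStatsPipeline:
    """Routes each item type to its own handler, as a single pipeline receives all items."""

    def process_item(self, item, spider):
        if isinstance(item, TeamStats):
            return self.handle_team(item)
        if isinstance(item, Player):
            return self.handle_player(item)
        return item  # unknown item types pass through unchanged

    def handle_team(self, item):
        item['table'] = 'teams'    # hypothetical: tag which DB table the item targets
        return item

    def handle_player(self, item):
        item['table'] = 'players'
        return item

pipeline = NbaStatsPipeline()
team = pipeline.process_item(TeamStats(team='Boston Celtics'), spider=None)
player = pipeline.process_item(Player(name='John Doe'), spider=None)
print(team['table'], player['table'])  # prints: teams players
```

Note that nothing extra is needed in the spider for this to work: every object yielded from parse(), parseWPNow(), or parseRoster() flows through the same process_item() call, and isinstance() tells the two item types apart there.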
alecxe