
So I want to scrape data from this site, especially from the company details part:

Site to crawl

I got some help from someone to get it working with Python Playwright, but I need to get this done with Python scrapy-selenium.

I want to rewrite the code from that answer in a scrapy-selenium way.

Original Question

I tried to do it the way suggested in this issue:

scrapy-selenium

But no luck =/

My Code:

resources/search_results_searchpage.yml:

products:
    css: 'div[data-content="productItem"]'
    multiple: true
    type: Text
    children:
        link:
            css: a.elements-title-normal 
            type: Link
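
If it helps, this is roughly what the Extractor produces from that YAML (the HTML snippet below is made up for illustration, not taken from the real site):

from selectorlib import Extractor

yaml_str = """
products:
    css: 'div[data-content="productItem"]'
    multiple: true
    type: Text
    children:
        link:
            css: a.elements-title-normal
            type: Link
"""

extractor = Extractor.from_yaml_string(yaml_str)
html = '<div data-content="productItem"><a class="elements-title-normal" href="/product/1.html">Headphones</a></div>'
data = extractor.extract(html, base_url="https://www.alibaba.com/")
# data is roughly {'products': [{'link': 'https://www.alibaba.com/product/1.html'}]},
# so the spider can iterate data['products'] and read product['link']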

crawler.py:

import scrapy
import csv
from scrapy_selenium import SeleniumRequest
import os
from selectorlib import Extractor
from scrapy import Selector

class Spider(scrapy.Spider):
    name = 'alibaba_crawler'
    allowed_domains = ['alibaba.com']
    start_urls = ['http://alibaba.com/']
    link_extractor = Extractor.from_yaml_file(os.path.join(os.path.dirname(__file__), "../resources/search_results_searchpage.yml"))

    def start_requests(self):
        search_text = "Headphones"
        url = "https://www.alibaba.com/trade/search?fsb=y&IndexArea=product_en&CatId=&SearchText={0}&viewtype=G".format(search_text)

        yield SeleniumRequest(url=url, callback=self.parse, meta={"search_text": search_text})


    def parse(self, response):
        data = self.link_extractor.extract(response.text, base_url=response.url)
        for product in data['products']:
            parsed_url = product["link"]

            yield SeleniumRequest(url=parsed_url, callback=self.crawl_mainpage)
    
    def crawl_mainpage(self, response):
        driver = response.request.meta['driver']
        button = driver.find_element_by_xpath("//span[@title='Company Profile']")
        button.click()
        driver.quit()

        yield {
            'name': response.xpath("//h1[@class='module-pdp-title']/text()").extract(),
            'Year of Establishment': response.xpath("//td[contains(text(), 'Year Established')]/following-sibling::td/div/div/div/text()").extract()
         }
        

I run the crawler with:

scrapy crawl alibaba_crawler -o out.csv -t csv
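
For SeleniumRequest to do anything, scrapy-selenium's middleware and driver also have to be configured. A minimal settings.py sketch, assuming chromedriver is on your PATH:

# settings.py
from shutil import which

SELENIUM_DRIVER_NAME = 'chrome'
SELENIUM_DRIVER_EXECUTABLE_PATH = which('chromedriver')
SELENIUM_DRIVER_ARGUMENTS = ['--headless']

DOWNLOADER_MIDDLEWARES = {
    'scrapy_selenium.SeleniumMiddleware': 800
}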

The company name is returned correctly, but Year of Establishment is still empty even though it should contain a year.

2 Answers


I wasn't using the selector correctly. This is working now:

def crawl_mainpage(self, response):
    driver = response.request.meta['driver']
    driver.find_element_by_xpath("//span[@title='Company Profile']").click()
    sel = Selector(text=driver.page_source)
    driver.quit()

    yield {
        'Year of Establishment': sel.xpath("//td[contains(text(), 'Year Established')]/following-sibling::td/div/div/div/text()").extract()
    }
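
The reason this works: response in crawl_mainpage was rendered before the click, so response.xpath() never sees the Company Profile tab content, while a fresh Selector built from driver.page_source after the click does. If the tab content loads asynchronously, an explicit wait before reading page_source makes this more robust. A sketch using Selenium's WebDriverWait (the 10-second timeout is an assumption):

from scrapy import Selector
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def crawl_mainpage(self, response):
    driver = response.request.meta['driver']
    driver.find_element_by_xpath("//span[@title='Company Profile']").click()
    # wait until the profile table is actually in the DOM before reading page_source
    WebDriverWait(driver, 10).until(EC.presence_of_element_located(
        (By.XPATH, "//td[contains(text(), 'Year Established')]")))
    sel = Selector(text=driver.page_source)
    # note: better to avoid driver.quit() here; scrapy-selenium's middleware
    # manages the driver and quits it when the spider closes
    yield {
        'Year of Establishment': sel.xpath("//td[contains(text(), 'Year Established')]/following-sibling::td/div/div/div/text()").extract_first()
    }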

See the implementation below using the scrapy-selenium library. Note that Selenium is very slow for web scraping, and alternatives such as scrapy-splash or scrapy-playwright are advisable: scraping just 2 pages took over 22 seconds here, whereas scrapy-playwright took less than 5 seconds.

import scrapy
from scrapy.crawler import CrawlerProcess
import os
from selectorlib import Extractor
from scrapy_selenium import SeleniumRequest
from shutil import which


class Spider(scrapy.Spider):
    name = 'alibaba_crawler'
    allowed_domains = ['alibaba.com']
    start_urls = ['http://alibaba.com/']
    link_extractor = Extractor.from_yaml_file(os.path.join(
        os.path.dirname(__file__), "../resources/search_results_searchpage.yml"))

    def start_requests(self):
        search_text = "Headphones"
        url = "https://www.alibaba.com/trade/search?fsb=y&IndexArea=product_en&CatId=&SearchText={0}&viewtype=G".format(
            search_text)
        yield scrapy.Request(url, callback=self.parse, meta={"search_text": search_text})

    def parse(self, response):
        data = self.link_extractor.extract(
            response.text, base_url=response.url)
        for product in data['products']:
            parsed_url = product["link"]

            # the script clicks the Company Profile tab in the browser
            # before the rendered page source is handed back to Scrapy
            yield SeleniumRequest(url=parsed_url, callback=self.crawl_mainpage, script='document.querySelector("span[title=\'Company Profile\']").click();')

    def crawl_mainpage(self, response):
        yield {
            'name': response.xpath("//h1[@class='module-pdp-title']/text()").extract_first(),
            'Year of Establishment': response.xpath("//td[contains(text(), 'Year Established')]/following-sibling::td/div/div/div/text()").extract_first()
        }

if __name__ == "__main__":
    process = CrawlerProcess(settings={'DOWNLOADER_MIDDLEWARES': {
        'scrapy_selenium.SeleniumMiddleware': 800
    },
        'SELENIUM_DRIVER_NAME': 'chrome',
        'SELENIUM_DRIVER_EXECUTABLE_PATH': which('chromedriver'),
        'SELENIUM_DRIVER_ARGUMENTS': ['--headless']
    })
    process.crawl(Spider)
    process.start()

Note that I changed your extract() methods to extract_first() in order to return strings instead of lists. The script argument on the SeleniumRequest runs the JavaScript click in the browser before the rendered page source is returned, which is why crawl_mainpage can read the company profile fields with plain response.xpath().
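
For comparison, the scrapy-playwright version of the product-page request would look roughly like this (a sketch; the settings lines and the CSS selector are assumptions, not tested against the live site):

# settings sketch for scrapy-playwright:
#   DOWNLOAD_HANDLERS = {"https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler"}
#   TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
import scrapy
from scrapy_playwright.page import PageMethod

def parse(self, response):
    data = self.link_extractor.extract(response.text, base_url=response.url)
    for product in data['products']:
        yield scrapy.Request(
            url=product["link"],
            callback=self.crawl_mainpage,
            meta={
                "playwright": True,
                # click the Company Profile tab before the rendered HTML is returned
                "playwright_page_methods": [
                    PageMethod("click", "span[title='Company Profile']"),
                ],
            },
        )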
