
Trying to get my head around Scrapy but hitting a few dead ends.

I have two tables on a page and would like to extract the data from each one, then move along to the next page.

The tables look like this (the first is called Y1, the second Y2) and their structures are the same.

<div id="Y1" style="margin-bottom: 0px; margin-top: 15px;">
                                <h2>First information</h2><hr style="margin-top: 5px; margin-bottom: 10px;">                    

                <table class="table table-striped table-hover table-curved">
                    <thead>
                        <tr>
                            <th class="tCol1" style="padding: 10px;">First Col Head</th>
                            <th class="tCol2" style="padding: 10px;">Second Col Head</th>
                            <th class="tCol3" style="padding: 10px;">Third Col Head</th>
                        </tr>
                    </thead>
                    <tbody>

                        <tr>
                            <td>Info 1</td>
                            <td>Monday 5 September, 2016</td>
                            <td>Friday 21 October, 2016</td>
                        </tr>
                        <tr class="vevent">
                            <td class="summary"><b>Info 2</b></td>
                            <td class="dtstart" timestamp="1477094400"><b></b></td>
                            <td class="dtend" timestamp="1477785600">
                            <b>Sunday 30 October, 2016</b></td>
                        </tr>
                        <tr>
                            <td>Info 3</td>
                            <td>Monday 31 October, 2016</td>
                            <td>Tuesday 20 December, 2016</td>
                        </tr>


                    <tr class="vevent">
                        <td class="summary"><b>Info 4</b></td>                      
                        <td class="dtstart" timestamp="1482278400"><b>Wednesday 21 December, 2016</b></td>
                        <td class="dtend" timestamp="1483315200">
                        <b>Monday 2 January, 2017</b></td>
                    </tr>



                </tbody>
            </table>

As you can see, the structure is a little inconsistent, but as long as I can get each td and output it to CSV then I'll be a happy guy.

I tried using XPath but this only confused me more.

My last attempt:

import scrapy
from SchoolDates_1.items import Schooldates1Item

class myScraperSpider(scrapy.Spider):
    name = "myScraper"
    allowed_domains = ["mysite.co.uk"]
    start_urls = (
        'https://mysite.co.uk/page1/',
    )

    def parse_products(self, response):
        products = response.xpath('//*[@id="Y1"]/table')
        # ignore the table header row
        for product in products[1:]:
            item = Schooldates1Item()
            item['hol'] = product.xpath('//*[@id="Y1"]/table/tbody/tr[1]/td[1]').extract()[0]
            item['first'] = product.xpath('//*[@id="Y1"]/table/tbody/tr[1]/td[2]').extract()[0]
            item['last'] = product.xpath('//*[@id="Y1"]/table/tbody/tr[1]/td[3]').extract()[0]
            yield item

No errors here, but it just fires back lots of information about the crawl and no actual results.

Update:

  import scrapy

       class SchoolSpider(scrapy.Spider):
name = "school"

allowed_domains = ["termdates.co.uk"]
start_urls =    (
                'https://termdates.co.uk/school-holidays-16-19-abingdon/',
                )

  def parse_products(self, response):
  products = sel.xpath('//*[@id="Year1"]/table//tr')
 for p in products[1:]:
  item = dict()
  item['hol'] = p.xpath('td[1]/text()').extract_first()
  item['first'] = p.xpath('td[1]/text()').extract_first()
  item['last'] = p.xpath('td[1]/text()').extract_first()
  yield item

This gives me: IndentationError: unexpected indent

If I run the amended script below (thanks to @Granitosaurus) to output to CSV (`-o schoolDates.csv`), I get an empty file:

import scrapy

class SchoolSpider(scrapy.Spider):
    name = "school"
    allowed_domains = ["termdates.co.uk"]
    start_urls = ('https://termdates.co.uk/school-holidays-16-19-abingdon/',)

    def parse_products(self, response):
        products = sel.xpath('//*[@id="Year1"]/table//tr')
        for p in products[1:]:
            item = dict()
            item['hol'] = p.xpath('td[1]/text()').extract_first()
            item['first'] = p.xpath('td[1]/text()').extract_first()
            item['last'] = p.xpath('td[1]/text()').extract_first()
            yield item

This is the log:

2017-03-23 12:04:08 [scrapy.core.engine] INFO: Spider opened
2017-03-23 12:04:08 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-03-23 12:04:08 [scrapy.extensions.telnet] DEBUG: Telnet console listening on ...
2017-03-23 12:04:08 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://termdates.co.uk/robots.txt> (referer: None)
2017-03-23 12:04:08 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://termdates.co.uk/school-holidays-16-19-abingdon/> (referer: None)
2017-03-23 12:04:08 [scrapy.core.scraper] ERROR: Spider error processing <GET https://termdates.co.uk/school-holidays-16-19-abingdon/> (referer: None)
Traceback (most recent call last):
  File "c:\python27\lib\site-packages\twisted\internet\defer.py", line 653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "c:\python27\lib\site-packages\scrapy-1.3.3-py2.7.egg\scrapy\spiders\__init__.py", line 76, in parse
    raise NotImplementedError
NotImplementedError
2017-03-23 12:04:08 [scrapy.core.engine] INFO: Closing spider (finished)
2017-03-23 12:04:08 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 467,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 11311,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 3, 23, 12, 4, 8, 845000),
 'log_count/DEBUG': 3,
 'log_count/ERROR': 1,
 'log_count/INFO': 7,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/NotImplementedError': 1,
 'start_time': datetime.datetime(2017, 3, 23, 12, 4, 8, 356000)}
2017-03-23 12:04:08 [scrapy.core.engine] INFO: Spider closed (finished)
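That NotImplementedError comes from Scrapy itself: responses for start_urls are handed to a callback named parse by default, and the base class's parse only raises. A method named parse_products is never called automatically. A minimal stand-in (a sketch of the behaviour, not the real scrapy.Spider):

```python
# Simplified stand-in for scrapy.Spider, sketching why the traceback ends in
# NotImplementedError: the default callback is parse(), and the base class's
# parse() only raises. parse_products() is never invoked automatically.
class Spider:
    def parse(self, response):
        raise NotImplementedError


class SchoolSpider(Spider):
    def parse_products(self, response):  # wrong name: Scrapy won't call this
        yield {'url': response}


spider = SchoolSpider()
try:
    spider.parse('fake response')        # what the engine effectively does
except NotImplementedError:
    print('base-class parse raised NotImplementedError')
```

Renaming parse_products to parse (or passing callback=self.parse_products on an explicit Request) avoids hitting the base-class method.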

Update 2: (skips rows) This pushes results to the CSV file but skips every other row.

The Shell shows {'hol': None, 'last': u'\r\n\t\t\t\t\t\t\t\t', 'first': None}

import scrapy

class SchoolSpider(scrapy.Spider):
    name = "school"
    allowed_domains = ["termdates.co.uk"]
    start_urls = ('https://termdates.co.uk/school-holidays-16-19-abingdon/',)

    def parse(self, response):
        products = response.xpath('//*[@id="Year1"]/table//tr')
        for p in products[1:]:
            item = dict()
            item['hol'] = p.xpath('td[1]/text()').extract_first()
            item['first'] = p.xpath('td[2]/text()').extract_first()
            item['last'] = p.xpath('td[3]/text()').extract_first()
            yield item

Solution: Thanks to @vold. This crawls all pages in start_urls and deals with the inconsistent table layout:

# -*- coding: utf-8 -*-
import scrapy
from SchoolDates_1.items import Schooldates1Item

class SchoolSpider(scrapy.Spider):
    name = "school"
    allowed_domains = ["termdates.co.uk"]
    start_urls = ('https://termdates.co.uk/school-holidays-16-19-abingdon/',
                  'https://termdates.co.uk/school-holidays-3-dimensions',)

    def parse(self, response):
        products = response.xpath('//*[@id="Year1"]/table//tr')
        # ignore the table header row
        for product in products[1:]:
            item = Schooldates1Item()
            item['hol'] = product.xpath('td[1]//text()').extract_first()
            item['first'] = product.xpath('td[2]//text()').extract_first()
            item['last'] = ''.join(product.xpath('td[3]//text()').extract()).strip()
            item['url'] = response.url
            yield item
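The ''.join(...).strip() on the third column is what copes with cells like the one that produced u'\r\n\t\t\t\t\t\t\t\t' earlier: td[3]//text() can return several text nodes (layout whitespace plus the real value inside the <b> tag), so the fragments are joined and trimmed. A pure-Python illustration of just that step:

```python
# td[3]//text() may yield several fragments for one cell: indentation
# whitespace from the markup plus the actual date inside <b>.
fragments = ['\r\n\t\t\t\t\t\t\t\t', 'Sunday 30 October, 2016']

# Joining then stripping collapses them into a clean value for the CSV.
cleaned = ''.join(fragments).strip()
print(cleaned)  # Sunday 30 October, 2016
```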
stonk
    Please provide more information: What did you try? What code? Which XPATH expression confuses you? Did you read the Scrapy Tutorial about [Selectors](https://doc.scrapy.org/en/latest/topics/selectors.html)? – rfelten Mar 22 '17 at 09:58
  • Hi rfelten, I have added my latest code above. Thanks. – stonk Mar 22 '17 at 10:20
  • Can you provide a link to the site you want to parse? Also, try not using `tbody` in xpath expression. – vold Mar 22 '17 at 11:12
  • @vold any reason not to use tbody? seems like an obvious way to avoid parsing header rows. – Granitosaurus Mar 22 '17 at 11:36
  • @stutray tbody is added by browsers like Mozilla and Chrome, and it does not exist in the original HTML source code. – Umair Ayub Mar 22 '17 at 11:57
  • `This give me: IndentationError: unexpected indent` your code is not indented correctly. This is the correct indentation: https://pastebin.mozilla.org/8982858 – Granitosaurus Mar 22 '17 at 14:01
  • Thanks folks. Please see edit for further issues – stonk Mar 23 '17 at 08:35
  • @stutray did you use scrapy [shell](https://doc.scrapy.org/en/latest/topics/shell.html) for debugging and checking if scrapy return something? If not, give a try, it's a perfect tool for debug. – vold Mar 23 '17 at 11:48
  • Try to rename `parse_products` to `parse`. See this answer http://stackoverflow.com/questions/34600064/scrapy-request-return-notimplementederror – vold Mar 23 '17 at 12:46
  • I changed `parse_products` to`parse`. An error shows as: **NameError: global name 'sel' is not defined** - FIXED - Changed 'sel.xpath' to 'response.xpath' – stonk Mar 23 '17 at 13:16
  • See **Update 2**. It seems to be skipping rows – stonk Mar 23 '17 at 13:33
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/138839/discussion-between-vold-and-stutray). – vold Mar 23 '17 at 14:58

3 Answers


You need to slightly correct your code. Since you already select all the rows within the table, you don't need to point to the table again. Thus you can shorten your XPath to something like this: `td[1]//text()`.

def parse_products(self, response):
    products = response.xpath('//*[@id="Year1"]/table//tr')
    # ignore the table header row
    for product in products[1:]:
        item = Schooldates1Item()
        item['hol'] = product.xpath('td[1]//text()').extract_first()
        item['first'] = product.xpath('td[2]//text()').extract_first()
        item['last'] = product.xpath('td[3]//text()').extract_first()
        yield item
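The double slash also matters for the rows where the value sits inside a `<b>` tag: `td[1]/text()` only sees text that is a direct child of the cell, while `td[1]//text()` descends into `<b>`. A stdlib sketch of the difference (xml.etree's `.text` and `itertext()` standing in for the `/text()` and `//text()` forms; the row is a simplified copy of one from the question):

```python
import xml.etree.ElementTree as ET

# A simplified copy of one "vevent" row from the question's HTML.
row = ET.fromstring(
    '<tr class="vevent">'
    '<td class="summary"><b>Info 2</b></td>'
    '<td class="dtend"><b>Sunday 30 October, 2016</b></td>'
    '</tr>'
)

td = row.find('td[1]')
# Direct text of the <td> (the /text() analogue) is empty: the string
# lives inside the child <b> element.
print(td.text)                    # None
# Descendant text (the //text() analogue) recovers it.
print(''.join(td.itertext()))     # Info 2
```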

Edited my answer since @stutray provided the link to the site.

vold
  • `./` is not necessary here, since the expression is already bound to the first level. If you'd look for any descendant you would need to use `.//` indeed. Also `extract_first()` is a relatively new shortcut for `extract()[0]` :) – Granitosaurus Mar 22 '17 at 11:35
  • I agree and corrected my answer, but without a link to the site, it's all I can suggest :) – vold Mar 22 '17 at 11:39

You can use CSS selectors instead of XPaths; I always find CSS selectors easier.

def parse_products(self, response):

    for product in response.css("#Y1 table tr")[1:]:
        item = Schooldates1Item()
        item['hol'] = product.css('td:nth-child(1)::text').extract_first()
        item['first'] = product.css('td:nth-child(2)::text').extract_first()
        item['last'] = product.css('td:nth-child(3)::text').extract_first()
        yield item

Also, do not use the tbody tag in selectors. Source:

Firefox, in particular, is known for adding `<tbody>` elements to tables. Scrapy, on the other hand, does not modify the original page HTML, so you won't be able to extract any data if you use `<tbody>` in your XPath expressions.
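A quick stdlib illustration of that failure mode (xml.etree standing in for Scrapy's selectors; the markup is a hypothetical raw page source with no tbody):

```python
import xml.etree.ElementTree as ET

# Raw HTML as downloaded: no <tbody>, even though a browser's inspector
# shows one, because the browser inserts it while building the DOM.
table = ET.fromstring('<table><tr><td>Info 1</td></tr></table>')

# A path that insists on tbody matches nothing against the raw source...
print(table.findall('./tbody/tr'))    # []
# ...while a tbody-agnostic descendant search still finds the rows.
print(len(table.findall('.//tr')))    # 1
```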

Umair Ayub

I got it working with these xpaths for the HTML source you've provided:

products = sel.xpath('//*[@id="Y1"]/table//tr')
for p in products[1:]:
    item = dict()
    item['hol'] = p.xpath('td[1]/text()').extract_first()
    item['first'] = p.xpath('td[1]/text()').extract_first()
    item['last'] = p.xpath('td[1]/text()').extract_first()
    yield item

The above assumes that each table row contains one item.

Granitosaurus
  • TBODY is added by browsers like Mozilla and Chrome and it does not exist in the HTML source of the response, so your xpath won't work. – Umair Ayub Mar 22 '17 at 11:55
  • @Umair well in the context of OP's code it would work :P. Also you imply that OP doesn't use a browser or some rendering to download the source. So in the context of this question my original answer would work, but I adjusted the answer nevertheless to reflect your point. – Granitosaurus Mar 22 '17 at 12:01
  • Thanks for all your input folks. Please see my edit with the site i'm trying to scrape – stonk Mar 22 '17 at 13:15
  • Quick question - why is it `td[1]` for each of the xpaths - are the `td`s being removed by `.extract_first()`? – David Jun 14 '17 at 15:46