
I want to use the Python Scrapy module to scrape all the URLs from my website and write the list to a file. I looked through the examples, but didn't see a simple one that does this.

– Adam F
  • StackOverflow isn't a site to ask people to write your code for you - *try something* and then come ask a question about a specific problem you run into. – Amber Mar 05 '12 at 02:47
  • Have you tried the tutorial there? It's quite self-explanatory. If you *have* tried the tutorial and still have trouble, try posting some code that you've tried first (+1 @Amber) – inspectorG4dget Mar 05 '12 at 02:58
  • Amber and inspectorG4dget, I wrote the program that does this, but I can't post it yet because I don't have enough reputation - there's a waiting time. I'll post the solution tomorrow morning. – Adam F Mar 05 '12 at 06:16

2 Answers


Here's the Python program that worked for me:

from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider
from scrapy.http import Request

DOMAIN = 'example.com'
URL = 'http://%s' % DOMAIN

class MySpider(BaseSpider):
    name = DOMAIN
    allowed_domains = [DOMAIN]
    start_urls = [
        URL
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        # Grab the href of every anchor on the page.
        for url in hxs.select('//a/@href').extract():
            # Make relative links absolute before following them.
            if not (url.startswith('http://') or url.startswith('https://')):
                url = URL + url
            print url
            # Recurse: crawl each discovered URL with this same callback.
            yield Request(url, callback=self.parse)

Save this in a file called spider.py.

You can then use a shell pipeline to post-process this text:

bash$ scrapy runspider spider.py > urls.out
bash$ cat urls.out | grep 'example.com' | sort | uniq | grep -v '#' | grep -v 'mailto' > example.urls

This gives me a list of all the unique URLs on my site.
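
If you'd rather have Scrapy write the file itself instead of redirecting stdout, the spider can yield each URL as an item and let the built-in feed exporter serialize it. Here's a minimal sketch of a replacement `parse` method for the spider above, assuming a modern Scrapy (1.0+), where plain dicts are accepted as items and `response.urljoin()` is available; the item key `url` is just an illustrative name:

    def parse(self, response):
        for href in response.xpath('//a/@href').extract():
            url = response.urljoin(href)  # also resolves relative links
            yield {'url': url}            # one item row for the feed exporter
            yield Request(url, callback=self.parse)

Running `scrapy runspider spider.py -o urls.csv` then writes the collected URLs to urls.csv. You'd still deduplicate afterwards (e.g. with the sort | uniq step above), since the same link can appear on many pages.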

– Adam F
  • That's cool. You have got the answer. Now go ahead and accept the answer... and, oh yeah, there might be a "Self Learner" badge waiting for you. :) – Nishant Mar 06 '12 at 04:34
  • There's a small bug in this program. The line `if not url.startswith('http://'):` won't handle https links correctly. – Joshua Snider Jun 27 '15 at 17:24
  • @JoshuaSnider I updated it. But this is a short snippet of sample code, so it's not meant to be authoritative for all situations. – Adam F Jun 27 '15 at 22:18

Something cleaner (and maybe more useful) would be to use LinkExtractor:

from scrapy.http import Request
from scrapy.linkextractors import LinkExtractor

    def parse(self, response):
        # An empty LinkExtractor() extracts every link on the page;
        # see the documentation for the filtering options it accepts.
        le = LinkExtractor()
        for link in le.extract_links(response):
            yield Request(link.url, callback=self.parse)
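
For completeness, here's that method dropped into a full spider - a sketch assuming Scrapy 1.x or later, where `BaseSpider` has become `scrapy.Spider` (the class name `LinkSpider` is just illustrative):

import scrapy
from scrapy.http import Request
from scrapy.linkextractors import LinkExtractor

class LinkSpider(scrapy.Spider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com']

    def parse(self, response):
        le = LinkExtractor()  # no arguments: extract every link on the page
        for link in le.extract_links(response):
            print(link.url)
            yield Request(link.url, callback=self.parse)

A nice side effect is that LinkExtractor only returns links with a crawlable scheme, so non-HTTP hrefs such as mailto: should already be filtered out, and Scrapy's duplicate request filter keeps the crawl from revisiting the same page.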
– eLRuLL