
In the Scrapy docs, there is the following example to illustrate how to use an authenticated session:

from scrapy.spider import BaseSpider
from scrapy.http import FormRequest
from scrapy import log

class LoginSpider(BaseSpider):
    name = 'example.com'
    start_urls = ['http://www.example.com/users/login.php']

    def parse(self, response):
        return [FormRequest.from_response(response,
                    formdata={'username': 'john', 'password': 'secret'},
                    callback=self.after_login)]

    def after_login(self, response):
        # check login succeed before going on
        if "authentication failed" in response.body:
            self.log("Login failed", level=log.ERROR)
            return

        # continue scraping with authenticated session...

I've got that working, and it's fine. But my question is: what do you actually have to do to "continue scraping with authenticated session", as the comment on the last line says?

– Herman Schaaf

1 Answer


In the code above, the FormRequest used to authenticate has the after_login function set as its callback. This means that after_login will be called with the response returned by the login attempt.

It then checks whether you are successfully logged in by searching the response for a specific string, in this case "authentication failed". If that string is found, the spider stops.

Now, once the spider has got this far, it knows that it has successfully authenticated, and you can start spawning new requests and/or scraping data. So, in this case:

from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from scrapy import log

# ...

def after_login(self, response):
    # check login succeed before going on
    if "authentication failed" in response.body:
        self.log("Login failed", level=log.ERROR)
        return
    # We've successfully authenticated, let's have some fun!
    else:
        return Request(url="http://www.example.com/tastypage/",
               callback=self.parse_tastypage)

def parse_tastypage(self, response):
    hxs = HtmlXPathSelector(response)
    yum = hxs.select('//img')

    # etc.

If you look at the CrawlSpider example in the Scrapy docs (http://doc.scrapy.org/en/latest/topics/spiders.html#crawlspider-example), there's an example of a spider that authenticates before scraping.

In this case, it handles things in the parse function (the default callback of any request that doesn't specify one).

def parse(self, response):
    hxs = HtmlXPathSelector(response)
    if hxs.select("//form[@id='UsernameLoginForm_LoginForm']"):
        return self.login(response)
    else:
        return self.get_section_links(response)

So, whenever a request is made, the response is checked for the presence of the login form. If it is there, we know that we need to log in, so we call the relevant function; if it's not present, we call the function that is responsible for scraping the data from the response.
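The login and get_section_links methods themselves aren't shown in that snippet. A minimal sketch of what they might look like is below; the form field names, the credentials and the XPath used to find the section links are all placeholders, not the original spider's actual values:

from scrapy.http import FormRequest, Request
from scrapy.selector import HtmlXPathSelector

# ... inside the same spider class as the parse() method above ...

def login(self, response):
    # Fill in and submit the login form that parse() detected.
    # The field names and credentials here are placeholders.
    return FormRequest.from_response(response,
                formdata={'username': 'john', 'password': 'secret'},
                callback=self.parse)

def get_section_links(self, response):
    # Request each section link found on the page.
    # The XPath is just an example; use whatever matches your site.
    hxs = HtmlXPathSelector(response)
    for href in hxs.select('//a[@class="section"]/@href').extract():
        yield Request(url=href, callback=self.parse)

Note that both callbacks point back at parse, so every response goes through the login-form check described above.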

I hope this is clear, feel free to ask if you have any other questions!


Edit:

Okay, so you want to do more than just spawn a single request and scrape it. You want to follow links.

To do that, all you need to do is scrape the relevant links from the page, and spawn requests using those URLs. For example:

def parse_page(self, response):
    """ Scrape useful stuff from page, and spawn new requests

    """
    hxs = HtmlXPathSelector(response)
    images = hxs.select('//img')
    # .. do something with them
    links = hxs.select('//a/@href').extract()

    # Yield a new request for each link we found
    # (relative links would need to be joined with response.url first)
    for link in links:
        yield Request(url=link, callback=self.parse_page)

As you can see, it spawns a new request for every URL on the page, and each of those requests will call this same function with its response, so we have some recursive scraping going on.

What I've written above is just an example. If you want to "crawl" pages, you should look into CrawlSpider rather than doing things manually.
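For completeness, here is a rough sketch of how the same login flow might be wired into a CrawlSpider, using the same old-style imports as the rest of this answer (current Scrapy exposes these as scrapy.spiders.CrawlSpider, scrapy.spiders.Rule and scrapy.linkextractors.LinkExtractor). The URLs, the rule's allow pattern and the form details are all assumptions:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.http import Request, FormRequest
from scrapy.selector import HtmlXPathSelector

class TastyCrawlSpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    login_url = 'http://www.example.com/users/login.php'

    # After login, follow any link under /tastypage/ and hand it to
    # parse_tastypage. The allow pattern is just an example.
    rules = (
        Rule(SgmlLinkExtractor(allow=r'/tastypage/'),
             callback='parse_tastypage', follow=True),
    )

    def start_requests(self):
        # Send the login page to our own callback instead of letting
        # CrawlSpider's rule machinery touch it.
        return [Request(url=self.login_url, callback=self.login)]

    def login(self, response):
        # Placeholder credentials, as in the example above.
        return FormRequest.from_response(response,
                    formdata={'username': 'john', 'password': 'secret'},
                    callback=self.after_login)

    def after_login(self, response):
        if "authentication failed" in response.body:
            self.log("Login failed")
            return
        # No callback here: the response goes through CrawlSpider's
        # default parse(), which applies the rules defined above.
        return Request(url="http://www.example.com/tastypage/")

    def parse_tastypage(self, response):
        hxs = HtmlXPathSelector(response)
        yum = hxs.select('//img')
        # etc.

The important points are that start_requests is overridden so the login page bypasses the rule machinery, and that after_login returns a plain Request with no callback, so its response is handled by CrawlSpider's default parse() and the rules take over from there.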

– Acorn
  • Ok, so I'm using the `scrapy crawl` command to run this (Don't know if that matters). After login success, if I call `parse_tastypage`, it only parses that one page, and then exits. How do I tell it to follow all links and crawl that as well? – Herman Schaaf May 01 '11 at 20:04
  • Updated my answer to show an example of spawning multiple requests. – Acorn May 01 '11 at 20:16
  • Yes, I am actually using a `CrawlSpider` in my own code - how would I then do it differently? (without having to explicitly parse the links myself) – Herman Schaaf May 01 '11 at 20:17
  • Is there anything in particular that you don't understand about [the well commented example](http://doc.scrapy.org/topics/spiders.html#crawlspider-example) that you'd like me to explain? – Acorn May 01 '11 at 20:25
  • I posted a new question, that's a bit more specific than my first one - [Crawling with an authenticated session in Scrapy](http://stackoverflow.com/q/5851213/445210) – Herman Schaaf May 01 '11 at 20:35
  • Thanks! Any chance you can find the link to the demo spider with login handling page? – wrongusername Oct 03 '12 at 22:08
  • @wrongusername It was just a link to the example in the crawlspider documentation section: http://doc.scrapy.org/en/latest/topics/spiders.html#crawlspider-example – Acorn Oct 04 '12 at 13:22
  • ahhh I see. Thanks a lot Acorn! :) – wrongusername Oct 04 '12 at 14:58
  • You can take a look at https://github.com/scrapy/loginform; it can extract all the form data automatically so you can use it. – gnemoug Sep 21 '15 at 14:54
  • @Acorn the link address is broken. – verystrongjoe Dec 20 '15 at 19:38
  • The correct address for the sample code is http://scrapy.readthedocs.io/en/latest/topics/spiders.html#crawlspider-example – donarb Jul 12 '16 at 01:46
  • The example link shared by @Acorn is awesome, but it is for an outdated version of Scrapy. Can anyone provide an example of the same with the latest version of Scrapy? – Vishvajeet Ramanuj Jul 27 '20 at 12:00