
We use the following Lua script when using Scrapy Splash with Crawlera:

function use_crawlera(splash)
    -- Make sure you pass your Crawlera API key in the 'crawlera_user' arg.
    -- Have a look at the file spiders/quotes-js.py to see how to do it.
    -- Find your Crawlera credentials in https://app.scrapinghub.com/
    local user = splash.args.crawlera_user

    local host = 'proxy.crawlera.com'
    local port = 8010
    local session_header = 'X-Crawlera-Session'
    local session_id = 'create'

    splash:on_request(function (request)
        request:set_header('X-Crawlera-Cookies', 'disable')
        request:set_header(session_header, session_id)
        request:set_proxy{host, port, username=user, password=''}
    end)

    splash:on_response_headers(function (response)
        if response.headers[session_header] ~= nil then
            session_id = response.headers[session_header]
        end
    end)
end

function main(splash)
    use_crawlera(splash)
    splash:init_cookies(splash.args.cookies)
    assert(splash:go{
        splash.args.url,
        headers=splash.args.headers,
        http_method=splash.args.http_method,
    })
    assert(splash:wait(3))
    return {
        html = splash:html(),
        cookies = splash:get_cookies(),
    }
end

There is a session_id variable in that Lua script which I badly need, but how can I access it from Scrapy's response?

I've tried response.session_id and response.headers['X-Crawlera-Session'], but neither works.

Aminah Nuraini

2 Answers


Use splash:set_result_header.
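splash:set_result_header(name, value) adds a header to the HTTP response that Splash itself sends back, so the value becomes visible to Scrapy as an ordinary response header. A minimal sketch of the Scrapy side, assuming the Lua script has been extended to call splash:set_result_header('X-Crawlera-Session', session_id) before returning (session_id would need to be made visible to main, e.g. by returning it from use_crawlera — that change is not shown in the question's script):

```python
# Sketch only: assumes the Lua script calls
#   splash:set_result_header('X-Crawlera-Session', session_id)
# so the session id arrives as a plain HTTP header on the Splash result.

def extract_session_id(headers):
    """Read the Crawlera session id from a Scrapy-style headers mapping.

    Scrapy's response.headers maps bytes names to bytes values, so both
    str and bytes lookups are attempted here to keep the sketch flexible.
    """
    value = headers.get('X-Crawlera-Session') or headers.get(b'X-Crawlera-Session')
    if isinstance(value, bytes):
        value = value.decode('ascii')
    return value

# In a spider callback this becomes:
#   session_id = extract_session_id(response.headers)
print(extract_session_id({'X-Crawlera-Session': b'2124641382'}))  # prints 2124641382
```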

Gallaecio

  1. Also return HAR data (https://splash.readthedocs.io/en/stable/scripting-ref.html#splash-har) from your Lua script:
    return {
        html = splash:html(),
        har = splash:har(),
        cookies = splash:get_cookies(),
    }
  2. Assuming you are using scrapy-splash (https://github.com/scrapy-plugins/scrapy-splash), make sure that you set the execute endpoint on your request:

meta['splash']['endpoint'] = 'execute'

If you use scrapy.Request, render.json is the default endpoint, but for scrapy_splash.SplashRequest the default endpoint is render.html. Check out these two examples to see how to set the endpoint: https://github.com/scrapy-plugins/scrapy-splash#requests

  3. Only now do you have access to the X-Crawlera-Session header in your parse method (note that json must be imported in your spider module):
    def parse(self, response):
        headers = json.loads(response.text)['har']['log']['entries'][0]['response']['headers']
        session_id = next(x for x in headers if x['name'] == 'X-Crawlera-Session')['value']
>>> headers = json.loads(response.text)['har']['log']['entries'][0]['response']['headers']
>>> next(x for x in headers if x['name'] == 'X-Crawlera-Session')
{u'name': u'X-Crawlera-Session', u'value': u'2124641382'}
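
The lookup from step 3 can be exercised end to end with stdlib json alone; the payload below is made up, but its nesting mirrors the HAR structure that splash:har() returns:

```python
import json

# Made-up minimal payload whose nesting mirrors what splash:har() returns;
# only the pieces needed for the header lookup are included.
sample_body = json.dumps({
    "har": {"log": {"entries": [{
        "response": {"headers": [
            {"name": "Content-Type", "value": "text/html"},
            {"name": "X-Crawlera-Session", "value": "2124641382"},
        ]}
    }]}},
    "html": "<html></html>",
})

# The same two lines as in the parse() method above, with response.text
# replaced by the sample body:
headers = json.loads(sample_body)['har']['log']['entries'][0]['response']['headers']
session_id = next(x for x in headers if x['name'] == 'X-Crawlera-Session')['value']
print(session_id)  # prints 2124641382
```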