12

How do I pass parameters in a request to a URL like this:

site.com/search/?action=search&description=My Search here&e_author=

How do I put the arguments in the structure of a spider's Request, something like this example:

req = Request(url="site.com/",parameters={x=1,y=2,z=3})
Gh057

5 Answers

17

Pass your GET parameters inside the URL itself:

return Request(url="https://yoursite.com/search/?action=search&description=MySearchhere&e_author=")

You should probably define your parameters in a dictionary and then "urlencode" it:

from urllib.parse import urlencode

params = { 
    "action": "search",
    "description": "My search here",
    "e_author": ""
}
url = "https://yoursite.com/search/?" + urlencode(params)

return Request(url=url)
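For reference, urlencode(params) here produces action=search&description=My+search+here&e_author= (spaces are encoded as + by default), so you never have to escape values by hand.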
alecxe
7

To create a GET request with params using Scrapy, you can use the following example:

yield scrapy.FormRequest(
    url=url,
    method='GET',
    formdata=params,
    callback=self.parse_result
)

where 'params' is a dict with your parameters.
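For example, the search request from the question could be written like this (a minimal sketch; the URL and the callback name are placeholders):

import scrapy

params = {
    'action': 'search',
    'description': 'My Search here',
    'e_author': '',
}

# with method='GET', FormRequest serializes formdata into the query string
yield scrapy.FormRequest(
    url='https://site.com/search/',
    method='GET',
    formdata=params,
    callback=self.parse_result,
)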

Roman
  • Great! It even supports setting variables multiple times, using either `{'a': ['val1', 'val2']}` or, I think, `('a', 'val1'), ('a', 'val2')`. https://docs.scrapy.org/en/latest/topics/request-response.html#formrequest-objects – thomasa88 Jun 12 '20 at 04:09
  • 1
    This answer avoids the use of external libraries; for me, it's the best. – Alejo Bernardin Jan 18 '22 at 02:12
3

You have to build the URL yourself from whatever parameters you have.

Python 3 or above

import urllib.parse

params = {
    'key': self.access_key,
    'part': 'snippet,replies',
    'videoId': self.video_id,
    'maxResults': 100
}
url = f'https://www.googleapis.com/youtube/v3/commentThreads/?{urllib.parse.urlencode(params)}'
request = scrapy.Request(url, callback=self.parse)
yield request

Python 3+ example
Here I am fetching all comments for a YouTube video using the official YouTube API. The comments come back paginated, so note how the URL is constructed from params for each call.

import scrapy
import urllib.parse
import json
import datetime
from youtube_scrapy.items import YoutubeItem

class YoutubeSpider(scrapy.Spider):
    name = 'youtube'
    BASE_URL = 'https://www.googleapis.com/youtube/v3'

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.access_key = 'your_youtube_api_access_key'
        self.video_id = 'any_youtube_video_id'

    def start_requests(self):
        params = {
            'key': self.access_key,
            'part': 'snippet,replies',
            'videoId': self.video_id,
            'maxResults': 100
        }
        url = f'{self.BASE_URL}/commentThreads/?{urllib.parse.urlencode(params)}'
        request = scrapy.Request(url, callback=self.parse)
        # stash the params on the request so the callback can reuse them
        request.meta['params'] = params
        return [request]

    def parse(self, response):
        data = json.loads(response.body)

        # let's collect each top-level comment
        items = data.get('items', [])
        for item in items:
            created_date = item['snippet']['topLevelComment']['snippet']['publishedAt']
            _created_date = datetime.datetime.strptime(created_date, '%Y-%m-%dT%H:%M:%S.000Z')
            id = item['snippet']['topLevelComment']['id']
            record = {
                'created_date': _created_date,
                'body': item['snippet']['topLevelComment']['snippet']['textOriginal'],
                'creator_name': item['snippet']['topLevelComment']['snippet'].get('authorDisplayName', ''),
                'id': id,
                'url': f'https://www.youtube.com/watch?v={self.video_id}&lc={id}',
            }

            yield YoutubeItem(**record)

        # let's paginate if a next page of comments is available
        next_page_token = data.get('nextPageToken', None)
        if next_page_token:
            params = response.meta['params']
            params['pageToken'] = next_page_token
            url = f'{self.BASE_URL}/commentThreads/?{urllib.parse.urlencode(params)}'
            request = scrapy.Request(url, callback=self.parse)
            request.meta['params'] = params
            yield request
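Note that the params dict travels between callbacks via request.meta here; in Scrapy 1.7+ you can pass such state more explicitly with cb_kwargs.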
Alok
1

You can use add_or_replace_parameters from w3lib:

from w3lib.url import add_or_replace_parameters
from scrapy import Request

def abc(self, response):
    url = "https://yoursite.com/search/"  # can be response.url or any other URL
    params = {
        "action": "search",
        "description": "My search here",
        "e_author": ""
    }

    return Request(url=add_or_replace_parameters(url, params))
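w3lib is already installed as a Scrapy dependency, so this adds nothing new to your environment; and unlike plain string concatenation, add_or_replace_parameters replaces an existing parameter of the same name instead of appending a duplicate.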
Dejavu
-1

Scrapy doesn't offer this directly. What you are trying to do is create a URL, for which you can use the urlparse module (urllib.parse in Python 3).
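A minimal sketch of that approach, using the parameters from the question (the functions live in urllib.parse on Python 3):

from urllib.parse import urlencode, urlunsplit

params = {'action': 'search', 'description': 'My Search here', 'e_author': ''}
# (scheme, netloc, path, query, fragment)
url = urlunsplit(('https', 'site.com', '/search/', urlencode(params), ''))
# -> https://site.com/search/?action=search&description=My+Search+here&e_author=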

eLRuLL