I am trying to scrape this url:

https://www.bloomberg.com/news/articles/2019-06-03/a-tesla-collapse-would-boost-european-carmakers-bernstein-says

I just want to scrape the title and posted date, but Bloomberg keeps blocking me and thinks I am a robot.

Sample Response that I've received:

<!doctype html>
<html>
<head>
<title>Bloomberg - Are you a robot?</title>
<meta name="viewport" content="width=device-width, initial-scale=1">

Any idea how I can make the website believe that the request is coming from a browser, using Scrapy?

This is what I've done so far:

    def parse(self, response):
        yield scrapy.Request(
            'https://www.bloomberg.com/news/articles/2019-05-30/tesla-dealt-another-blow-as-barclays-sees-it-as-niche-carmaker',
            callback=self.parse_sub,
            headers={
                'X-Crawlera-Session': 'create',
                'Referer': "https://www.bloomberg.com/news/articles/2019-05-30/tesla-dealt-another-blow-as-barclays-sees-it-as-niche-carmaker",
                'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
                'accept-language': 'en-US,en;q=0.9,fr;q=0.8,ro;q=0.7,ru;q=0.6,la;q=0.5,pt;q=0.4,de;q=0.3',
                'cache-control': 'max-age=0',
                'upgrade-insecure-requests': '1',
                'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
            })

    def parse_sub(self, response):
        print(response.text)

I also use Crawlera, and I added it to settings.py:

 DOWNLOADER_MIDDLEWARES = {'scrapy_crawlera.CrawleraMiddleware': 300}
 CONCURRENT_REQUESTS = 32
 CONCURRENT_REQUESTS_PER_DOMAIN = 32
 AUTOTHROTTLE_ENABLED = False
 DOWNLOAD_TIMEOUT = 600
 CRAWLERA_APIKEY = 'API_KEY'

Please help me, thank you.

1 Answer

You need to send browser-like headers, mainly a User-Agent, which tells the website general information about the browser and device making the request. There is a massive list of User-Agent strings on GitHub if you need help finding one.

You can specify headers for a specific request like this:

yield Request(url=..., callback=..., headers={"User-Agent": "user_agent_string", "Referer": "url_here"})
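
For a fuller picture, here is a minimal spider sketch along those lines; the spider name, the Referer value, and the title selector are placeholders I chose for illustration, and the User-Agent string is simply the one from your request above:

    import scrapy

    class BloombergSpider(scrapy.Spider):
        name = "bloomberg"  # placeholder spider name

        def start_requests(self):
            url = "https://www.bloomberg.com/news/articles/2019-06-03/a-tesla-collapse-would-boost-european-carmakers-bernstein-says"
            headers = {
                # Any realistic desktop browser User-Agent should do; this one is from the question
                "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36",
                "Referer": "https://www.bloomberg.com/",
            }
            yield scrapy.Request(url, headers=headers, callback=self.parse)

        def parse(self, response):
            # The <title> tag is the simplest place to read the article title from;
            # the posted date will need a selector that matches Bloomberg's markup.
            yield {"title": response.css("title::text").get()}

If you want the same User-Agent on every request instead of per request, you can set Scrapy's USER_AGENT or DEFAULT_REQUEST_HEADERS options in settings.py.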