
I am learning about Python and scraping, and I wrote my first spider using Scrapy. It works fine when I run it locally against my test site. I deployed the project to my remote server with Scrapyd, but when I schedule the spider to run against the same test site, it always returns a 503. At first I thought perhaps my server's IP was firewalled, but I've since changed IP. I also tried spoofing my user agent to mimic one of my browsers (a sketch of the override is shown after the log below), but to no effect. I'm not sure what else I can try, as this is new territory for me, but any pointers would be appreciated.

Edit: The relevant output from the spider log:

2017-07-28 18:49:45 [scrapy.core.engine] INFO: Spider opened
2017-07-28 18:49:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-07-28 18:49:45 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-07-28 18:49:45 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://pieces-auto.oscaro.com/batterie-2585-g> (failed 1 times): 503 Service Unavailable
2017-07-28 18:49:45 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://pieces-auto.oscaro.com/batterie-2585-g> (failed 2 times): 503 Service Unavailable
2017-07-28 18:49:45 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET https://pieces-auto.oscaro.com/batterie-2585-g> (failed 3 times): 503 Service Unavailable
2017-07-28 18:49:45 [scrapy.core.engine] DEBUG: Crawled (503) <GET https://pieces-auto.oscaro.com/batterie-2585-g> (referer: None)
2017-07-28 18:49:45 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <503 https://pieces-auto.oscaro.com/batterie-2585-g>: HTTP status code is not handled or not allowed
2017-07-28 18:49:45 [scrapy.core.engine] INFO: Closing spider (finished)
2017-07-28 18:49:45 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 829,
 'downloader/request_count': 3,
 'downloader/request_method_count/GET': 3,
 'downloader/response_bytes': 62411,
 'downloader/response_count': 3,
 'downloader/response_status_count/503': 3,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 7, 28, 16, 49, 45, 953027),
 'httperror/response_ignored_count': 1,
 'httperror/response_ignored_status_count/503': 1,
 'log_count/DEBUG': 5,
 'log_count/INFO': 9,
 'memusage/max': 81936384,
 'memusage/startup': 81936384,
 'response_received_count': 1,
 'retry/count': 2,
 'retry/max_reached': 1,
 'retry/reason_count/503 Service Unavailable': 2,
 'scheduler/dequeued': 3,
 'scheduler/dequeued/memory': 3,
 'scheduler/enqueued': 3,
 'scheduler/enqueued/memory': 3,
 'start_time': datetime.datetime(2017, 7, 28, 16, 49, 45, 639361)}
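
For reference, the user-agent spoofing mentioned above was along these lines; this is a minimal sketch rather than my exact project code (the spider name, class name, and UA string are placeholders):

    import scrapy

    class BatterySpider(scrapy.Spider):
        # Placeholder names; the real spider targets the same category page.
        name = "batteries"
        start_urls = ["https://pieces-auto.oscaro.com/batterie-2585-g"]

        def start_requests(self):
            # Send a browser-like User-Agent instead of Scrapy's default one.
            headers = {
                "User-Agent": (
                    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                    "(KHTML, like Gecko) Chrome/59.0 Safari/537.36"
                ),
            }
            for url in self.start_urls:
                yield scrapy.Request(url, headers=headers, callback=self.parse)

        def parse(self, response):
            # Log the status so it's easy to see whether the 503 persists.
            self.logger.info("Got %s for %s", response.status, response.url)

The same effect can be had by setting USER_AGENT in settings.py; either way the site still answers 503 from the server's IP.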
  • Any output from Scrapyd? /var/log/scrapyd/scrapyd.log ? – Dan-Dev Jul 30 '17 at 11:16
  • @Dan-Dev forgot to add that. Apologies. – Dark Star1 Jul 30 '17 at 11:23
  • can you run "curl -vv https://pieces-auto.oscaro.com/batterie-2585-g" from your remote server and get a response? (the editor took out the protocol) – Dan-Dev Jul 30 '17 at 11:29
  • @Dan-Dev Here is the output of that curl command: https://paste.fedoraproject.org/paste/smNzIPuno8N5cDN1p18zPw it's too long to paste in comments. – Dark Star1 Jul 30 '17 at 11:39
  • It looks like some sort of rate limiting and/or browser checking on the web server; I guess the headers in the curl output showed a 503. What you pasted is different from what I got from my machine with curl, which makes me think rate limiting. See https://doc.scrapy.org/en/latest/topics/practices.html#avoiding-getting-banned in the docs (a settings sketch follows these comments). – Dan-Dev Jul 30 '17 at 11:50
  • @Dan-Dev I also get a different response curling from my local machine. What I pasted is from the actual vps server. – Dark Star1 Jul 30 '17 at 12:00
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/150522/discussion-between-dan-dev-and-dark-star1). – Dan-Dev Jul 30 '17 at 12:03
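
Following up on the rate-limiting suggestion above, a minimal sketch of the throttling settings described on that docs page (the values here are illustrative guesses, not a confirmed fix for this site):

    # settings.py -- slow the crawl down so it looks less like a bot.
    DOWNLOAD_DELAY = 2                  # seconds between requests to the same site
    RANDOMIZE_DOWNLOAD_DELAY = True     # vary the delay between 0.5x and 1.5x
    CONCURRENT_REQUESTS_PER_DOMAIN = 1  # keep concurrency minimal
    AUTOTHROTTLE_ENABLED = True         # adapt the delay to server latency
    AUTOTHROTTLE_START_DELAY = 2
    AUTOTHROTTLE_MAX_DELAY = 10

If the block is tied to the server's IP range rather than to the request rate, throttling alone may not help.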

0 Answers