
I am new to scrapy and trying to submit a form and scrape the response from https://www.fbo.gov/index?s=opportunity&tab=search&mode=list.

When I use the scrapy shell:

scrapy shell "https://www.fbo.gov/index?s=opportunity&tab=search&mode=list"

it opens up the shell, but no response object is available. Running

print(response)

prints None. I've tried using just "https://www.fbo.gov" and other variations, but nothing seems to work. The example I followed used "http://quotes.toscrape.com/page/1/" and that works fine.

Why do I get no response when using a different URL? Does it have to do with the https? Do I need to use a FormRequest to get a response, since the page contains a form? I figured it would at least return the HTML of the form. I plan to 'check' various checkboxes and submit the form; a rough sketch of what I had in mind follows.
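For context, here is roughly what I was planning to try once I get a response. This is only a sketch: the "some_checkbox" field name is made up, since I can't inspect the form yet.

import scrapy

class FboSpider(scrapy.Spider):
    name = "fbo"
    start_urls = ["https://www.fbo.gov/index?s=opportunity&tab=search&mode=list"]

    def parse(self, response):
        # from_response() pre-fills the form's hidden fields; formdata
        # overrides the fields I want to change (names here are guesses).
        yield scrapy.FormRequest.from_response(
            response,
            formdata={"some_checkbox": "on"},
            callback=self.parse_results,
        )

    def parse_results(self, response):
        # Just confirm we got the post-submit page back.
        self.log(response.text[:200])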

Thanks in advance for any help!

Log:

2017-08-09 21:45:43 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: fbg)
2017-08-09 21:45:43 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'fbg.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['fbg.spiders'], 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'BOT_NAME': 'fbg', 'LOGSTATS_INTERVAL': 0}
2017-08-09 21:45:44 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole']
2017-08-09 21:45:44 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-08-09 21:45:45 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-08-09 21:45:45 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-08-09 21:45:45 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-08-09 21:45:45 [scrapy.core.engine] INFO: Spider opened
2017-08-09 21:45:45 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.fbo.gov/robots.txt> (referer: None)
2017-08-09 21:45:45 [scrapy.downloadermiddlewares.robotstxt] DEBUG: Forbidden by robots.txt: <GET https://www.fbo.gov/index?s=opportunity&tab=search&mode=list>
[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s]   crawler    <scrapy.crawler.Crawler object at 0x1101058d0>
[s]   item       {}
[s]   request    <GET https://www.fbo.gov/index?s=opportunity&tab=search&mode=list>
[s]   settings   <scrapy.settings.Settings object at 0x1101059e8>
[s] Useful shortcuts:
[s]   fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
[s]   fetch(req)                  Fetch a scrapy.Request and update local objects 
[s]   shelp()           Shell help (print this help)
[s]   view(response)    View response in a browser
Ryan Gedwill
  • I have both of the websites working fine on my end. Could you post the whole log? You can do that via `scrapy shell someurl 2>1 | tee output.log` if you're on a unix machine. – Granitosaurus Aug 10 '17 at 04:13
  • @Granitosaurus just edited to show log – Ryan Gedwill Aug 10 '17 at 04:47
  • @Granitosaurus Opening a fresh terminal fixed my problem. Is that just the way it has to be done? – Ryan Gedwill Aug 10 '17 at 04:53
  • @Granitosaurus as a sidenote, running your command for dumping the log dumped it to a file called 1 and the output.log file has '>>>' and nothing else. I'm assuming that wasn't planned? – Ryan Gedwill Aug 10 '17 at 05:08
  • It should work fine if you are running on unix and have the `tee` command. The command should have been `2>&1 | tee output.log`: `2>&1` redirects stderr to stdout, and `| tee output.log` copies the output to both stdout and a file (it's like a T junction). I dropped the `&` above; `2>1` by itself redirects stderr to a file literally named `1`, which is exactly what you saw, so that was a typo on my side. – Granitosaurus Aug 10 '17 at 05:28
  • Ah, that explains it. I'll come back here to update if anything else comes up. – Ryan Gedwill Aug 10 '17 at 05:51

1 Answer


Your log says:

2017-08-09 21:45:45 [scrapy.downloadermiddlewares.robotstxt] DEBUG: Forbidden by robots.txt: <GET https://www.fbo.gov/index?s=opportunity&tab=search&mode=list>

It looks like you have the setting ROBOTSTXT_OBEY set to True (you can see 'ROBOTSTXT_OBEY': True in the "Overridden settings" line of your log), so your request is being filtered out by the robots.txt middleware. Try either disabling it in your project settings (as sketched below) or running scrapy shell url -s ROBOTSTXT_OBEY=0

The reason it worked when you "opened a new terminal" is most likely that you started the shell from a directory outside your project, so Scrapy was no longer picking up this setting from your project's settings.py.
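For reference, turning this off permanently is a one-line change; a minimal sketch, assuming the default settings.py that scrapy startproject generates:

# fbg/settings.py
# When False, RobotsTxtMiddleware no longer filters out requests
# that the target site's robots.txt disallows.
ROBOTSTXT_OBEY = False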

Granitosaurus
  • Yep that worked. Thank you! What exactly does changing that variable do? – Ryan Gedwill Aug 10 '17 at 05:01
  • `ROBOTSTXT_OBEY` is a setting that makes your spider follow the instructions in a site's `robots.txt` file. Most websites have one; it lives at domain.com/robots.txt and lists which pages web crawlers may and may not crawl. Of course, the "may not" part is often the whole website, so people rarely pay attention to these rules. You can check them yourself, as sketched below. – Granitosaurus Aug 10 '17 at 05:04
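As an illustration of that last comment, Python's standard library can fetch and evaluate robots.txt rules directly; a minimal sketch, independent of Scrapy (and the rules on fbo.gov may have changed since this exchange):

from urllib.robotparser import RobotFileParser

# Download and parse the site's robots.txt, then ask whether a given
# user agent is allowed to fetch a given URL.
rp = RobotFileParser("https://www.fbo.gov/robots.txt")
rp.read()
print(rp.can_fetch("*", "https://www.fbo.gov/index?s=opportunity&tab=search&mode=list"))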