
I'm trying to push my scraped data to my Firebase account in the cloud, but I'm getting this ImportError when I run the spider. I tried creating a new project and even reinstalling firebase and shub on a specific Python version, but it didn't help.

The spider runs perfectly on my machine and doesn't show any ImportErrors. Here is the error log:

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
  File "/usr/local/lib/python2.7/site-packages/sh_scrapy/middlewares.py", line 30, in process_spider_output
    for x in result:
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/offsite.py", line 29, in process_spider_output
    for x in result:
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/app/__main__.egg/Terminator/spiders/IcyTermination.py", line 18, in parse
    from firebase import firebase
ImportError: No module named firebase

Any help?

P.hunter

1 Answer

I can't comment due to reputation, but have you created your requirements.txt?

Here you will find how to deploy your own dependencies to Scrapinghub.

Basically, you create a requirements.txt file at the root of your project, with one dependency per line, and add

requirements_file: requirements.txt

to your scrapinghub.yml file.
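For reference, a minimal scrapinghub.yml might look like this (the project ID and stack name below are illustrative placeholders, not taken from the question):

```yaml
# scrapinghub.yml -- at the project root, next to scrapy.cfg
projects:
  default: 12345            # hypothetical Scrapinghub project ID
requirements_file: requirements.txt
```

And a requirements.txt next to it listing each package on its own line, e.g. a single line containing `firebase`. After redeploying with `shub deploy`, Scrapinghub installs those packages into the environment the spider runs in, which is separate from your local machine's environment.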

Henrique Coura
  • But isn't requirements.txt for errors in the deployment of the project? My project deploys successfully but gives this error while running. – P.hunter Jul 04 '17 at 06:17
  • requirements.txt tells Scrapinghub which extra packages it must install to run your spider. You can deploy successfully and still get import errors at runtime. If you have set up the requirements.txt file, let me see your scrapinghub.yml. – Henrique Coura Jul 04 '17 at 16:34