I have a CSV file with about 1000 article links and their DOIs, and I need to download these papers.
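(In the snippet below, List_dois is shown as a placeholder; it is actually filled from the CSV roughly like this, where the file name and the doi column name stand in for whatever the real file uses:)

import csv

# Placeholder names: the real file and column may differ.
List_dois = []
with open('papers.csv', newline='') as f:
    for row in csv.DictReader(f):
        List_dois.append(row['doi'])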

I have tried the following approach:

import logging
import sys
import time

from scidownl.scihub import SciHub

List_dois = [""]  # here I have inserted the list of 1000 DOIs

out = 'out_folder'
logging.basicConfig(filename='myapp.log', level=logging.INFO)
for doi in List_dois:
    try:
        SciHub(doi, out).download(choose_scihub_url_index=3)
        time.sleep(10)
    except Exception:
        # logging uses %-style lazy formatting; passing extra arguments to a
        # message with no placeholders makes logging fail to format the record
        logging.info("Error: %s for DOI %s", sys.exc_info()[0], doi)

But after about 10 downloads, it generates the following error:

Traceback (most recent call last):
  File "/home/username/PycharmProjects/pythonProject3/main.py", line 65, in <module>
    sci = SciHub(doi, out).download(choose_scihub_url_index=3)
  File "/home/username/PycharmProjects/pythonProject3/venv/lib/python3.8/site-packages/scidownl/scihub.py", line 90, in download
    self.download_pdf(pdf)
  File "/home/username/PycharmProjects/pythonProject3/venv/lib/python3.8/site-packages/scidownl/scihub.py", line 147, in download_pdf
    if self.is_captcha_page(res):
  File "/home/username/PycharmProjects/pythonProject3/venv/lib/python3.8/site-packages/scidownl/scihub.py", line 184, in is_captcha_page
    return 'must-revalidate' in res.headers['Cache-Control']
  File "/home/username/PycharmProjects/pythonProject3/venv/lib/python3.8/site-packages/requests/structures.py", line 54, in __getitem__
    return self._store[key.lower()][1]
KeyError: 'cache-control'

From the traceback, it looks like scidownl's is_captcha_page reads res.headers['Cache-Control'] unconditionally, and after roughly ten downloads the server returns a response without that header. How could I solve this problem? I don't want to increase the sleep time too much, otherwise the whole operation would take far too long...
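The only workaround I can think of is to catch the KeyError around the download call and retry with an increasing delay. Here is a minimal sketch of what I mean (download_with_retry, retries, and base_delay are just names and values I made up; the backoff numbers are arbitrary):

import logging
import time

from scidownl.scihub import SciHub

def download_with_retry(doi, out, retries=3, base_delay=10):
    """Try to download one DOI, backing off when the KeyError appears."""
    for attempt in range(1, retries + 1):
        try:
            SciHub(doi, out).download(choose_scihub_url_index=3)
            return True
        except KeyError:
            # Missing Cache-Control header: presumably a captcha or
            # rate-limit page. Wait longer before each retry instead of
            # raising the base sleep time for every request.
            wait = base_delay * attempt
            logging.info("KeyError for %s, retrying in %s s", doi, wait)
            time.sleep(wait)
        except Exception:
            logging.exception("Unexpected error for %s", doi)
            return False
    return False

This way the loop would only back off when the error actually shows up, but it feels like I am just masking whatever the server is really telling me.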
