I'm writing a script to download videos from a website, using `urlretrieve`. My internet connection is somewhat erratic and goes down now and then. When my network fails, `urlretrieve` hangs and doesn't pass control back to my program so I can handle the error. How do I go about solving this problem? Or should I use a different library for this purpose? If so, which is the best one (considering all the other features of `urllib` are more than sufficient for my use and the files I download are around 500-600 MB)?
SvbZ3r0
- Use the `requests` lib: stream and write the data, or just write the `.content` to a file, catching any error with a try/except. You can also set a timeout (see the sketch after these comments). – Padraic Cunningham Jun 15 '16 at 10:56
- @PadraicCunningham [This](http://docs.python-requests.org/en/master/) one? I'll look into it. Thanks – SvbZ3r0 Jun 15 '16 at 10:58
- Yep, you won't find better – Padraic Cunningham Jun 15 '16 at 11:01
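A rough sketch of the comment's simpler suggestion, for illustration only; the URL, filename, and timeout value are made up. Note that `.content` buffers the whole response in memory, so the streaming approach in the answer below is a better fit for files this size:

```python
import requests

try:
    # timeout bounds how long requests waits; a failed network raises instead of hanging
    response = requests.get("http://example.com/video.mp4", timeout=10)
    response.raise_for_status()
    with open("video.mp4", "wb") as f:
        # .content holds the entire file in memory before writing it out
        f.write(response.content)
except requests.exceptions.RequestException as exc:
    # Base class covering ConnectionError, Timeout, HTTPError, etc.
    print("download failed:", exc)
```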
1 Answer
Use the `requests` library. Requests will throw a `ConnectionError` exception when problems with the network arise. Refer to this Stack Overflow thread for how to go about downloading large files using requests.
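A minimal sketch of that approach; the URL, output path, and timeout value are placeholders, not something from the question. `stream=True` plus `iter_content` keeps the 500-600 MB body out of memory, and the `timeout` makes a dead connection raise instead of hanging:

```python
import requests

# Hypothetical URL and output path; swap in your own.
URL = "http://example.com/video.mp4"
DEST = "video.mp4"

def download(url, dest):
    try:
        # stream=True avoids loading the whole file into memory;
        # timeout bounds how long requests waits on the socket.
        with requests.get(url, stream=True, timeout=10) as response:
            response.raise_for_status()
            with open(dest, "wb") as f:
                for chunk in response.iter_content(chunk_size=8192):
                    if chunk:
                        f.write(chunk)
        return True
    except (requests.exceptions.ConnectionError, requests.exceptions.Timeout) as exc:
        # Control comes back here instead of hanging, so the caller can retry.
        print("network problem:", exc)
        return False

download(URL, DEST)
```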
If you're annoyed by the download starting all over again once the exception arises, look into the HTTP `Range` header, with which you'll be able to resume the download (provided you're saving the bytes already retrieved somewhere in your exception-handling code).
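A hedged sketch of such resume logic, assuming the server honours range requests (it answers `206 Partial Content`); the URL and file path are again placeholders, and the fallback to rewriting the file when the server ignores `Range` is my own choice, not part of the answer:

```python
import os
import requests

URL = "http://example.com/video.mp4"   # hypothetical URL
DEST = "video.mp4"

def resume_download(url, dest):
    # Resume from however many bytes are already on disk.
    start = os.path.getsize(dest) if os.path.exists(dest) else 0
    headers = {"Range": "bytes={}-".format(start)} if start else {}
    try:
        with requests.get(url, headers=headers, stream=True, timeout=10) as response:
            response.raise_for_status()
            # 206 means the server honoured the Range header and resumed;
            # 200 means it sent the whole file again, so start the file over.
            mode = "ab" if response.status_code == 206 else "wb"
            with open(dest, mode) as f:
                for chunk in response.iter_content(chunk_size=8192):
                    f.write(chunk)
    except (requests.exceptions.ConnectionError, requests.exceptions.Timeout) as exc:
        print("connection dropped; partial file kept for the next attempt:", exc)

resume_download(URL, DEST)
```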

Georg Grab
- I was worried because my current program supports download resumption. I'll take a look at `requests`. But I'm going to have to rewrite my entire code. – SvbZ3r0 Jun 15 '16 at 11:03
- Thanks a lot. I got it working. Completely switched my code from `urllib` to `requests`. – SvbZ3r0 Jun 16 '16 at 10:37