By default, when you make a request, the body of the response is downloaded immediately. In the Requests module you can override this behaviour and defer downloading the response body until you access the Response.content attribute, using the stream parameter described here: http://docs.python-requests.org/en/latest/user/advanced/#body-content-workflow
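
For reference, this is roughly what the requests version looks like (a minimal sketch; the URL is just a placeholder):

```python
import requests

# Minimal sketch: with stream=True only the status line and headers are
# fetched up front; the body is not downloaded until .content / iter_content()
# is accessed.
r = requests.get("http://example.com/some/redirecting/link",   # placeholder URL
                 stream=True, allow_redirects=True, timeout=10)
print(r.url)   # final URL after all redirects have been followed
r.close()      # release the connection without ever reading the body
```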

I wonder if pycurl can do the same thing.

What I want to do here is get the direct (final) URL of a few URLs that involve redirections, and it would be better if this could be done asynchronously. You may say I could use HEAD to do this, but the server I am sending requests to doesn't seem to support HEAD, so deferring the body download looks like the only way I can go. Can I use pycurl to do the same thing here?

I have no experience with pycurl, and as the deadline of my project approaches it would be great if you could show some code. Thanks!
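
From skimming the libcurl docs, I imagine something along these lines. This is a rough, untested sketch, assuming that FOLLOWLOCATION plus a write callback that aborts the transfer is the right way to skip the body (the URL is just a placeholder):

```python
import pycurl

def final_url(url):
    """Follow redirects, abort as soon as the first body bytes of the final
    response arrive, then read the effective URL off the handle."""
    c = pycurl.Curl()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.FOLLOWLOCATION, True)   # follow 3xx redirects with GET, not HEAD
    c.setopt(pycurl.MAXREDIRS, 10)
    # Returning 0 from the write callback tells libcurl we handled fewer bytes
    # than it delivered, which aborts the transfer (CURLE_WRITE_ERROR), so the
    # body is never downloaded.
    c.setopt(pycurl.WRITEFUNCTION, lambda chunk: 0)
    try:
        c.perform()
    except pycurl.error:
        pass  # expected: we aborted on purpose once the redirects were resolved
    effective = c.getinfo(pycurl.EFFECTIVE_URL)
    c.close()
    return effective

print(final_url("http://example.com/short/link"))  # placeholder URL
```

If that is the right building block, I suppose several such handles could be driven concurrently with pycurl.CurlMulti, but I have not got that far yet.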

iMath
    Why not use `requests`? – Thomas Orozco Dec 15 '14 at 09:39
  • @ThomasOrozco there are 30s URLs; perhaps requests cannot get all the direct URLs in a certain time, so the server closed the connections prematurely, and thus requests always raises a ConnectionError exception in my situation. – iMath Dec 15 '14 at 11:47
  • Bear in mind that requests is one of Python's most popular third-party libraries — it's somewhat unlikely you're in such a unique situation that you can't use it (especially if you're not sure why). – Thomas Orozco Dec 15 '14 at 11:49

0 Answers