I was trying to crawl this page using the python-requests library:
import requests
from lxml import etree, html
url = 'http://www.amazon.in/b/ref=sa_menu_mobile_elec_all?ie=UTF8&node=976419031'
r = requests.get(url)
tree = etree.HTML(r.text)
print tree
but I got the error mentioned above (requests.exceptions.TooManyRedirects).
I tried using the allow_redirects parameter, but I get the same error:
r = requests.get(url, allow_redirects=True)
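To try to see where the redirect loop goes, I also put together this debugging sketch (just an idea, not a fix: it disables automatic redirects to look at the first response, and catches the exception instead of letting it bubble up):

import requests
from requests.exceptions import TooManyRedirects

url = 'http://www.amazon.in/b/ref=sa_menu_mobile_elec_all?ie=UTF8&node=976419031'

# look at the first response only, without following redirects
r = requests.get(url, allow_redirects=False)
print r.status_code               # status of the first response
print r.headers.get('Location')   # where the server wants to redirect

# catch the exception instead of crashing
try:
    r = requests.get(url, allow_redirects=True)
except TooManyRedirects as e:
    print e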
I even tried sending headers and data along with the URL, but I'm not sure if this is the correct way to do it:
headers = {'content-type': 'text/html'}
payload = {'ie':'UTF8','node':'976419031'}
r = requests.post(url,data=payload,headers=headers,allow_redirects=True)
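In case it clarifies what I mean by sending headers: this is the kind of request I had in mind, with the query parameters passed via params and a browser-like User-Agent (the User-Agent value is just an example I picked; I don't know whether it is actually needed):

import requests

# query string parameters taken from the original URL
params = {'ie': 'UTF8', 'node': '976419031'}
# example browser-like User-Agent string (assumption, not verified)
headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:32.0) Gecko/20100101 Firefox/32.0'}

r = requests.get('http://www.amazon.in/b/ref=sa_menu_mobile_elec_all',
                 params=params, headers=headers)
print r.status_code
print len(r.text)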
How do I resolve this error? Out of curiosity I even tried BeautifulSoup 4, and I got a different but similar kind of error:
page = BeautifulSoup(urllib2.urlopen(url))
urllib2.HTTPError: HTTP Error 301: The HTTP server returned a redirect error that would lead to an infinite loop.
The last 30x error message was:
Moved Permanently
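For completeness, the urllib2 variant I would try next looks like this (again only a sketch, with the same example User-Agent; I don't know whether it avoids the redirect loop):

import urllib2
from bs4 import BeautifulSoup

url = 'http://www.amazon.in/b/ref=sa_menu_mobile_elec_all?ie=UTF8&node=976419031'
# example User-Agent header (assumption); urllib2.Request lets me attach headers
req = urllib2.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
page = BeautifulSoup(urllib2.urlopen(req))
print page.title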