Some websites automatically decline requests that lack a User-Agent header, and it's a hassle using bs4 to scrape many different kinds of tables.
This issue used to be solved with the following code:
import urllib2
import pandas as pd

url = 'http://finance.yahoo.com/quote/A/key-statistics?p=A'
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
response = opener.open(url)
tables = pd.read_html(response.read())
However, urllib2 has been deprecated, and urllib3 doesn't have a build_opener() attribute; I could not find an equivalent attribute either, even though I'm sure one exists.
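For what it's worth, in Python 3 the build_opener API was moved into the standard-library module urllib.request rather than the third-party urllib3 package, so the old snippet ports over almost unchanged. A minimal sketch (the fetch_tables wrapper is just an illustrative name, not part of the original code):

```python
import urllib.request

import pandas as pd


def fetch_tables(url):
    # urllib2.build_opener() became urllib.request.build_opener() in Python 3;
    # addheaders works the same way as before.
    opener = urllib.request.build_opener()
    opener.addheaders = [('User-agent', 'Mozilla/5.0')]
    response = opener.open(url)
    # read_html parses every <table> in the page into a list of DataFrames
    return pd.read_html(response.read())


# Usage (requires network access):
# tables = fetch_tables('http://finance.yahoo.com/quote/A/key-statistics?p=A')
```

urllib3 is a lower-level HTTP library and deliberately has no opener machinery, which is why the attribute can't be found there.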