I have this code:
import urllib
from bs4 import BeautifulSoup

base_url = 'https://en.wikipedia.org'
start_url = 'https://en.wikipedia.org/wiki/Computer_programming'
outfile_name = 'Computer_programming.csv'
no_of_links = 10

fp = open(outfile_name, 'wb')

def get_links(link):
    # Fetch the page and return the first no_of_links internal article links
    html = urllib.urlopen(link).read()
    soup = BeautifulSoup(html, "lxml")
    ret_list = soup.select('p a[href]')
    count = 0
    ret = []
    for tag in ret_list:
        link = tag['href']
        # Keep only plain /wiki/... links (no namespaces, no anchors)
        if link[0] == '/' and ':' not in link and link[:5] == '/wiki' and '#' not in link:
            ret.append(base_url + link)
            count = count + 1
            if count == no_of_links:
                return ret

# Level 1: links from the start page
l1 = get_links(start_url)
for link in l1:
    fp.write('%s;%s\n' % (start_url, link))

# Levels 2 and 3: links from the neighbours and their neighbours
for link1 in l1:
    l2 = get_links(link1)
    for link in l2:
        fp.write('%s;%s\n' % (link1, link))
    for link2 in l2:
        l3 = get_links(link2)
        for link in l3:
            fp.write('%s;%s\n' % (link2, link))

fp.close()
It saves a neighborhood of nodes in a CSV file. But when I try to run it, I get this error:
for link in l3:
TypeError: 'NoneType' object is not iterable
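From what I understand, this error means the value in the for loop is None instead of a list. A minimal sketch of how that can happen (maybe_list is a made-up example, not from my actual code):

def maybe_list(n):
    # If n is too small, no return statement is reached,
    # so Python implicitly returns None.
    if n >= 3:
        return [1, 2, 3]

for x in maybe_list(1):  # TypeError: 'NoneType' object is not iterable
    print(x)

But I don't see where something like that is happening in my code.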
I get the same error when I try to run the code with other Wikipedia links, like https://en.wikipedia.org/wiki/Technology. The only page it works for is https://en.wikipedia.org/wiki/Computer_science, which is a problem since I need to collect data from more pages than just the Computer science one.
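In case it helps narrow things down, here is a quick check I am planning to run (same imports and the same filter condition as above; count_links is just a throwaway name I made up) to see how many qualifying links each page actually yields:

def count_links(link):
    # Throwaway diagnostic: count how many <p> links pass the same filter
    html = urllib.urlopen(link).read()
    soup = BeautifulSoup(html, "lxml")
    count = 0
    for tag in soup.select('p a[href]'):
        href = tag['href']
        if href[0] == '/' and ':' not in href and href[:5] == '/wiki' and '#' not in href:
            count = count + 1
    return count

print(count_links('https://en.wikipedia.org/wiki/Technology'))

I don't yet see how the counts connect to the error, though.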
Can anyone give me a hint on how to deal with this?
Thanks a lot.