I am using Python 2.7 (Anaconda).
I have used the Wikipedia Python package to extract a list of article titles:
titles = wikipedia.random(pages=1000)  # returns a list of unicode titles
titles_encoded = [x.encode('utf-8') for x in titles]
Is there a way of using
wikipedia.summary(title=titles_encoded, auto_suggest=True, redirect=True).encode('utf-8')
to extract multiple article summaries at once? I have used a for loop, but it is really slow:
for n in range(1, 500):
    test[n] = wikipedia.summary(title=titles_encoded[n], auto_suggest=True, redirect=True).encode('utf-8')
    print(n, "text extracted")
I am looking for a solution that is more efficient/faster.
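Since each call spends most of its time waiting on the network, I wonder whether a thread pool would help. Here is a rough sketch of what I have in mind, with a placeholder function standing in for the actual wikipedia.summary call (I am not sure this is the right approach):

```python
from multiprocessing.dummy import Pool as ThreadPool  # thread-based Pool

def fetch_summary(title):
    # Placeholder: in the real version this would be
    # wikipedia.summary(title, auto_suggest=True, redirect=True).encode('utf-8')
    return title.upper()

pool = ThreadPool(8)  # 8 worker threads issuing requests concurrently
summaries = pool.map(fetch_summary, ["alpha", "beta", "gamma"])
pool.close()
pool.join()
print(summaries)  # -> ['ALPHA', 'BETA', 'GAMMA']
```

Would replacing the placeholder with the real wikipedia.summary call work here, or is there a rate limit or a batch API that would make this a bad idea?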