I want to crawl around 500 articles from the Al Jazeera website and collect 4 fields for each:
- URL
- Title
- Tags
- Author
I have written a script that collects data from the home page, but it only picks up a couple of articles; the rest are spread across different categories. How can I iterate through 500 articles? Is there an efficient way to do it?
import pandas as pd
import requests
from bs4 import BeautifulSoup

# Fetch the home page and parse the "More top stories" section.
page = requests.get('https://www.aljazeera.com/')
soup = BeautifulSoup(page.content, 'html.parser')
section = soup.find(id='more-top-stories')
inside_articles = section.find_all(class_='mts-article mts-default-article')

# Extract one field per teaser; avoid shadowing the list name in the comprehensions.
article_title = [art.find(class_='mts-article-title').get_text() for art in inside_articles]
article_desc = [art.find(class_='mts-article-p').get_text() for art in inside_articles]
tag = [art.find(class_='mts-category').get_text() for art in inside_articles]
link = [art.find(class_='mts-article-title').find('a')['href'] for art in inside_articles]
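One idea I have been toying with is to seed the crawl from several category pages and keep collecting internal links until I have 500 unique article URLs, then visit each article page for the fields I need. Below is a rough sketch of that approach; the category paths and the class names for author/tags are guesses on my part, not verified selectors, and would need checking against the actual markup:

import time
import pandas as pd
import requests
from bs4 import BeautifulSoup

BASE = 'https://www.aljazeera.com'

# Category landing pages to seed the crawl -- these paths are assumptions
# about the site's structure and may need adjusting.
CATEGORY_URLS = [BASE + '/news/', BASE + '/sports/', BASE + '/economy/']

def collect_article_links(category_url, session):
    """Gather candidate article URLs from one category page."""
    soup = BeautifulSoup(session.get(category_url).content, 'html.parser')
    links = set()
    for a in soup.find_all('a', href=True):
        href = a['href']
        if href.startswith('/'):  # crude heuristic: keep internal links only
            links.add(BASE + href)
    return links

def scrape_article(url, session):
    """Pull title, tags, and author from one article page.
    The class names below are placeholders, not verified selectors."""
    time.sleep(0.5)  # small delay to be polite to the server
    soup = BeautifulSoup(session.get(url).content, 'html.parser')
    title = soup.find('h1')
    author = soup.find(class_='article-author')            # assumed class
    tags = [t.get_text(strip=True)
            for t in soup.find_all(class_='article-tag')]  # assumed class
    return {'url': url,
            'title': title.get_text(strip=True) if title else None,
            'tags': tags,
            'author': author.get_text(strip=True) if author else None}

session = requests.Session()
all_links = set()
for cat in CATEGORY_URLS:
    all_links |= collect_article_links(cat, session)
    if len(all_links) >= 500:
        break

rows = [scrape_article(u, session) for u in sorted(all_links)[:500]]
df = pd.DataFrame(rows)
print(df.head())

Is something along these lines the right direction, or is there a more standard way (sitemap, pagination, an API) to enumerate that many articles?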