
I want to scrape all the subcategories and pages under the category "Category:Computer science". The link for that category is: http://en.wikipedia.org/wiki/Category:Computer_science.

I got an idea about this problem from the Stack Overflow answers linked here: Pythonic beautifulSoup4 : How to get remaining titles from the next page link of a wikipedia category and How to scrape Subcategories and pages in categories of a Category wikipedia page using Python.

However, the answer does not fully solve the problem. It only scrapes the pages in category "Computer science", whereas I want to extract all the subcategory names and their associated pages. The process should report the results in a BFS manner with a depth of 10. Is there any way to do this?

I found the following code in the linked post:

from pprint import pprint
from urllib.parse import urljoin

from bs4 import BeautifulSoup
import requests


base_url = 'https://en.wikipedia.org/wiki/Category:Computer_science'


def get_next_link(soup):
    # pagination link at the bottom of the "Pages in category" listing
    return soup.find("a", text="next page")

def extract_links(soup):
    # page titles listed under the "Pages in category" section
    return [a['title'] for a in soup.select("#mw-pages li a")]


with requests.Session() as session:
    content = session.get(base_url).content
    soup = BeautifulSoup(content, 'lxml')

    links = extract_links(soup)
    next_link = get_next_link(soup)
    while next_link is not None:  # while there is a Next Page link
        url = urljoin(base_url, next_link['href'])
        content = session.get(url).content
        soup = BeautifulSoup(content, 'lxml')

        links += extract_links(soup)

        next_link = get_next_link(soup)

pprint(links)
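
To make the requirement concrete, here is roughly the kind of traversal I have in mind: a plain breadth-first walk over the subcategory links, capped at a depth of 10. This is only a rough sketch of the idea, not a working solution; it assumes the subcategory links sit inside the `#mw-subcategories` block of each category page and it collects only the subcategory titles, not the pages themselves.

from collections import deque
from pprint import pprint
from urllib.parse import urljoin

from bs4 import BeautifulSoup
import requests


start_url = 'https://en.wikipedia.org/wiki/Category:Computer_science'
max_depth = 10  # BFS depth limit

with requests.Session() as session:
    queue = deque([(start_url, 0)])   # BFS queue of (category URL, depth)
    seen = {start_url}
    subcategories = {}                # category URL -> list of subcategory titles

    while queue:
        url, depth = queue.popleft()
        soup = BeautifulSoup(session.get(url).content, 'lxml')

        # subcategory links are assumed to live under the "Subcategories" block;
        # the '/wiki/Category:' prefix filters out "next page"/"previous page" links
        links = [a for a in soup.select('#mw-subcategories a')
                 if a.get('href', '').startswith('/wiki/Category:')]
        subcategories[url] = [a.get('title') for a in links]

        if depth < max_depth:
            for a in links:
                child = urljoin(url, a['href'])
                if child not in seen:
                    seen.add(child)
                    queue.append((child, depth + 1))

pprint(subcategories)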
1 Answer

To scrape the subcategories, you will have to use selenium to interact with the dropdowns. A simple traversal over the second block of links will yield the pages; however, to find all the subcategories, recursion is needed to group the data properly. The code below utilizes a simple variant of breadth-first search to determine when to stop looping over the dropdown toggle objects generated at each iteration of the while loop:

from selenium import webdriver
import time
from bs4 import BeautifulSoup as soup

def block_data(_d):
  # map a group heading (e.g. 'B') to its list of [title, href] pairs
  return {_d.find('h3').text:[[i.a.attrs.get('title'), i.a.attrs.get('href')] for i in _d.find('ul').find_all('li')]}

def get_pages(source:str) -> list:
  # pages listed under the "Pages in category" section, grouped alphabetically
  return [block_data(i) for i in soup(source, 'html.parser').find('div', {'id':'mw-pages'}).find_all('div', {'class':'mw-category-group'})]

d = webdriver.Chrome('/path/to/chromedriver')  # path to your chromedriver binary
d.get('https://en.wikipedia.org/wiki/Category:Computer_science')
all_pages = get_pages(d.page_source)
_seen_categories = []  # toggles that have already been expanded

def get_categories(source):
  # [href, text] for every subcategory label currently visible in the tree
  return [[i['href'], i.text] for i in soup(source, 'html.parser').find_all('a', {'class':'CategoryTreeLabel'})]

def total_depth(c):
  # count the leaf branches of the nested category structure
  return sum(1 if len(b) == 1 and not b[0] else sum([total_depth(i) for i in b]) for a, b in c.items())

def group_categories(source) -> dict:
  # nested dict: category name -> list of child structures (None if the node has no children)
  return {i.find('div', {'class':'CategoryTreeItem'}).a.text:(lambda x:None if not x else [group_categories(c) for c in x])(i.find_all('div', {'class':'CategoryTreeChildren'})) for i in source.find_all('div', {'class':'CategoryTreeSection'})}

while True:
  full_dict = group_categories(soup(d.page_source, 'html.parser'))
  flag = False
  for i in d.find_elements_by_class_name('CategoryTreeToggle'):
     try:
       if i.get_attribute('data-ct-title') not in _seen_categories:
          i.click()  # expand this subcategory
          flag = True
          time.sleep(1)
     except:
       pass  # toggle may be stale or not clickable; skip it on this pass
     else:
       _seen_categories.append(i.get_attribute('data-ct-title'))
  if not flag:  # nothing new was expanded on this pass: the tree is fully open
     break

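Once the loop exits (no unexpanded toggles remain), `full_dict` from the final pass holds the fully expanded category tree and the browser can be closed. A quick sanity check, assuming the loop above has completed:

d.quit()                    # release the browser once the tree has been captured
print(len(full_dict))       # number of top-level subcategories
print(list(full_dict)[:5])  # peek at the first few top-level category names
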
Output:

all_pages:

[{'\xa0': [['Computer science', '/wiki/Computer_science'], ['Glossary of computer science', '/wiki/Glossary_of_computer_science'], ['Outline of computer science', '/wiki/Outline_of_computer_science']]}, 
 {'B': [['Patrick Baudisch', '/wiki/Patrick_Baudisch'], ['Boolean', '/wiki/Boolean'], ['Business software', '/wiki/Business_software']]}, 
 {'C': [['Nigel A. L. Clarke', '/wiki/Nigel_A._L._Clarke'], ['CLEVER score', '/wiki/CLEVER_score'], ['Computational human modeling', '/wiki/Computational_human_modeling'], ['Computational social choice', '/wiki/Computational_social_choice'], ['Computer engineering', '/wiki/Computer_engineering'], ['Critical code studies', '/wiki/Critical_code_studies']]}, 
  {'I': [['Information and computer science', '/wiki/Information_and_computer_science'], ['Instance selection', '/wiki/Instance_selection'], ['Internet Research (journal)', '/wiki/Internet_Research_(journal)']]}, 
  {'J': [['Jaro–Winkler distance', '/wiki/Jaro%E2%80%93Winkler_distance'], ['User:JUehV/sandbox', '/wiki/User:JUehV/sandbox']]}, 
  {'K': [['Krauss matching wildcards algorithm', '/wiki/Krauss_matching_wildcards_algorithm']]}, 
  {'L': [['Lempel-Ziv complexity', '/wiki/Lempel-Ziv_complexity'], ['Literal (computer programming)', '/wiki/Literal_(computer_programming)']]}, 
  {'M': [['Machine learning in bioinformatics', '/wiki/Machine_learning_in_bioinformatics'], ['Matching wildcards', '/wiki/Matching_wildcards'], ['Sidney Michaelson', '/wiki/Sidney_Michaelson']]}, 
  {'N': [['Nuclear computation', '/wiki/Nuclear_computation']]}, {'O': [['OpenCV', '/wiki/OpenCV']]}, 
  {'P': [['Philosophy of computer science', '/wiki/Philosophy_of_computer_science'], ['Prefetching', '/wiki/Prefetching'], ['Programmer', '/wiki/Programmer']]}, 
  {'Q': [['Quaject', '/wiki/Quaject'], ['Quantum image processing', '/wiki/Quantum_image_processing']]}, 
  {'R': [['Reduction Operator', '/wiki/Reduction_Operator']]}, {'S': [['Social cloud computing', '/wiki/Social_cloud_computing'], ['Software', '/wiki/Software'], ['Computer science in sport', '/wiki/Computer_science_in_sport'], ['Supnick matrix', '/wiki/Supnick_matrix'], ['Symbolic execution', '/wiki/Symbolic_execution']]}, 
  {'T': [['Technology transfer in computer science', '/wiki/Technology_transfer_in_computer_science'], ['Trace Cache', '/wiki/Trace_Cache'], ['Transition (computer science)', '/wiki/Transition_(computer_science)']]}, 
  {'V': [['Viola–Jones object detection framework', '/wiki/Viola%E2%80%93Jones_object_detection_framework'], ['Virtual environment', '/wiki/Virtual_environment'], ['Visual computing', '/wiki/Visual_computing']]}, 
  {'W': [['Wiener connector', '/wiki/Wiener_connector']]}, 
  {'Z': [['Wojciech Zaremba', '/wiki/Wojciech_Zaremba']]}, 
  {'Ρ': [['Portal:Computer science', '/wiki/Portal:Computer_science']]}]

full_dict is quite large, and due to its size I am unable to post it entirely here. However, below is an implementation of a function that traverses the structure and selects all the elements down to a depth of ten:

def trim_data(d, depth, count):
   # keep nodes down to the requested depth; None marks a trimmed or childless branch
   return {a:None if count == depth or b is None else [trim_data(i, depth, count+1) for i in b] for a, b in d.items()}

final_subcategories = trim_data(full_dict, 10, 0)
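
For example, with a small hand-made tree (hypothetical data, only to show the shape of the result):

sample = {'A': [{'B': [{'C': None}]}]}
print(trim_data(sample, 1, 0))  # {'A': [{'B': None}]}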

Edit: script to remove the empty leaf placeholders from the tree:

def remove_empty_children(d):
  # collapse the [{}] placeholders into None and drop empty entries from child lists
  return {a:None if not b or (len(b) == 1 and not b[0]) else
     [remove_empty_children(i) for i in b if i] for a, b in d.items()}

When running the above:

c = {'Areas of computer science': [{'Algorithms and data structures': [{'Abstract data types': [{'Priority queues': [{'Heaps (data structures)': [{}]}, {}], 'Heaps (data structures)': [{}]}]}]}]}
d = remove_empty_children(c)

Output:

{'Areas of computer science': [{'Algorithms and data structures': [{'Abstract data types': [{'Priority queues': [{'Heaps (data structures)': None}], 'Heaps (data structures)': None}]}]}]}

Edit 2: flattening the entire structure:

def flatten_groups(d):
   # yield every category name in the nested structure, parents before children
   for a, b in d.items():
     yield a
     if b is not None:
        for i in map(flatten_groups, b):
           yield from i


print(list(flatten_groups(remove_empty_children(c))))

Output:

['Areas of computer science', 'Algorithms and data structures', 'Abstract data types', 'Priority queues', 'Heaps (data structures)', 'Heaps (data structures)']
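
If the duplicate names (such as the repeated 'Heaps (data structures)' above) are unwanted, the flat list can be de-duplicated while keeping its order with dict.fromkeys; this is a standard Python idiom rather than part of the original answer:

names = list(flatten_groups(remove_empty_children(c)))
unique_names = list(dict.fromkeys(names))  # drops later duplicates, keeps first-seen order
print(unique_names)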

Edit 3:

To access all the pages for every subcategory down to a certain level, the original get_pages function can be reused along with a slightly different version of the group_categories method:

import requests
from collections import namedtuple

def _group_categories(source) -> dict:
  # same as group_categories, but keyed by each subcategory's href instead of its display text
  return {i.find('div', {'class':'CategoryTreeItem'}).find('a')['href']:(lambda x:None if not x else [_group_categories(c) for c in x])(i.find_all('div', {'class':'CategoryTreeChildren'})) for i in source.find_all('div', {'class':'CategoryTreeSection'})}

page = namedtuple('page', ['pages', 'children'])

def subcategory_pages(d, depth, current = 0):
  r = {}
  for a, b in d.items():
     # fetch the category page for this href and collect its member pages
     all_pages_listing = get_pages(requests.get(f'https://en.wikipedia.org{a}').text)
     print(f'page number for {a}: {len(all_pages_listing)}')
     r[a] = page(all_pages_listing, None if current == depth or b is None else [subcategory_pages(i, depth, current+1) for i in b])
  return r


# full_dict must be rebuilt with _group_categories so that its keys are hrefs (see note below)
print(subcategory_pages(full_dict, 2))

Please note that in order to use subcategory_pages, full_dict must be rebuilt with _group_categories in place of group_categories, so that its keys are category hrefs rather than display names.
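
If a flat list of page titles per subcategory is more convenient than the nested namedtuple structure, the result of subcategory_pages can be walked recursively. This is only a rough sketch, assuming the {href: page(pages, children)} shape produced above:

def iter_titles(tree):
  # tree maps a category href to a page(pages, children) namedtuple
  for href, node in tree.items():
     for group in node.pages:            # each group: {'letter': [[title, link], ...]}
        for entries in group.values():
           for title, link in entries:
              yield href, title
     for child in (node.children or []):
        yield from iter_titles(child)

result = subcategory_pages(full_dict, 2)
for category, title in iter_titles(result):
   print(category, title)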

Ajax1234
  • Awesome script @Ajax1234!! Provided a plus. However, why did you use a browser simulator instead of xhr? Are the links generated dynamically? – SIM Oct 06 '18 at 22:09
  • 2
    @SIM Thank you. I used `selenium` merely because I have some familiarity with it, although I am sure `xhr` would work as well, and also because the OP needs the structure of the subcategories which on a Wikipedia page can only be accessed by toggling a button to the left of each `a` tag. `selenium` is not needed to gather all the page links (simple `requests` would do), although I had already created a `selenium` browser object, and had access to the page source. – Ajax1234 Oct 07 '18 at 00:18
  • @Ajax1234. Really superb and powerful script. – M S Oct 07 '18 at 10:25
  • @Ajax1234. I have a query. Suppose I am printing the `final_subcategories` for a depth of 5. I am getting the following output `'Areas of computer science': [{'Algorithms and data structures': [{'Abstract data types': [{'Priority queues': [{'Heaps (data structures)': [{}]}, {}], 'Heaps (data structures)': [{}]},...` What are the `[{}]}, {}]` entries, and how can I exclude them? I only need the title names. – M S Oct 07 '18 at 13:45
  • 1
    @MishraSiba `[{}]` is an [empty node](https://en.wikipedia.org/wiki/Node_(computer_science)) i.e referring to a subcategory with no children. Please see my recent edit, as I wrote a function to remove the empty dictionaries and single-element lists. – Ajax1234 Oct 07 '18 at 14:51
  • Thanks a lot. The output shown above, `{'Areas of computer science': [{'Algorithms and data structures': [{'Abstract data types': [{'Priority queues': [{'Heaps (data structures)': None}], 'Heaps (data structures)': None}]}]}]}`, is fine. But suppose I want the entire output (only the names) in a single list of strings, something like `{'Areas of computer science', 'Algorithms and data structures', 'Abstract data types', 'Priority queues', 'Heaps (data structures)', 'Heaps (data structures)', ' ', ' ', ...}`. Is there any way to achieve this? – M S Oct 07 '18 at 15:33
  • Great. Output is perfect Now. Thanks a lot. I am trying to execute the same. – M S Oct 07 '18 at 15:39
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/181434/discussion-between-mishra-siba-and-ajax1234). – M S Oct 07 '18 at 15:44
  • @Ajax1234! Thank you for the post in Edit 3! I am getting `raise InvalidSchema("No connection adapters were found for '%s'" % url) InvalidSchema: No connection adapters were found for '/wiki/Category:Areas_of_computer_science'` when running the code from Edit 3 of the original post. What does it mean? – M S Oct 09 '18 at 16:59
  • 1
    @MishraSiba Oops, please see my recent edit on "edit 3" – Ajax1234 Oct 09 '18 at 17:05
  • Thanks a lot Ajax! It helps. Thanks a ton. – M S Oct 09 '18 at 17:59