
I've checked a lot of questions and answers, but I can't find a solution. This snippet worked perfectly before, but two days ago it started throwing the error below:

Traceback (most recent call last):
  File "D:\Python\experiments\20210213.py", line 85, in <module>
    ws['B1'] = get_titles[loop_excel].contents[0].strip()
TypeError: 'NoneType' object is not callable
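(As general context, not specific to this script: Python raises this exact TypeError whenever the expression in front of the parentheses evaluates to None. A minimal illustration:)

```python
# Minimal reproduction of the error class: calling anything that
# evaluated to None raises this exact TypeError.
f = None
try:
    f()
except TypeError as exc:
    print(exc)  # 'NoneType' object is not callable
```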

1. I make the soup using:

soup = BeautifulSoup(html_text, 'html.parser')

2. Then I get the titles:

get_titles = soup.find_all('li', class_='chapter')

3. Then I loop to insert the titles into Excel:

wb = load_workbook('20210209.xlsx')
time.sleep(5)
ws = wb['Sheet1']
loop_excel = 0
for get_download in get_downloads_soup:
    ws.insert_rows(1, amount=1)
    ws['A1'] = datetime.now(pytz.timezone('Asia/Hong_Kong')).strftime("%Y-%m-%d %H:%M:%S")
    ws['B1'] = get_titles[loop_excel].contents[0].strip()

    loop_excel = loop_excel + 1

I've tried using print to check the type and length of get_titles, its contents, etc., and that works:

<class 'bs4.element.ResultSet'> 66

So I really don't know why it throws this error.
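(Side note, separate from the TypeError: the print above shows 66 titles, while the loop below iterates over get_downloads_soup, which has about 200 items, so get_titles[loop_excel] would eventually raise IndexError anyway. A sketch of a safer pattern, using illustrative stand-in lists rather than the real scraped data:)

```python
# Illustrative stand-ins for the scraped ResultSets; the real ones
# come from soup.find_all(...) and have unequal lengths (66 vs ~200).
get_titles = ['ch1 ', 'ch2 ']              # 2 items
get_downloads_soup = ['a', 'b', 'c', 'd']  # 4 items

rows = []
# zip stops at the shorter sequence, so no IndexError is possible
for title, download in zip(get_titles, get_downloads_soup):
    rows.append(title.strip())

print(rows)  # ['ch1', 'ch2']
```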

The whole thing looks like:

options = Options()
options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe'
driver = webdriver.Firefox(executable_path=r'C:\Users\louis\geckodriver.exe')
time.sleep(14)

# continue
articles_total_number = 333
page_number_start = 1
loop_download_first_page = 0
downloaded_number = 0

urlwhole = (f'blahblah')
driver.get(urlwhole) 
WebDriverWait(driver,100).until(EC.visibility_of_element_located((By.CLASS_NAME,'chapters')))
time.sleep(30)

# supplementary first run
for page_number in range(page_number_start,page_number_start + 1):

    #preparation
    time.sleep(5)
    get_downloads_selenium = driver.find_elements_by_css_selector('input.downloads')  # 200 resultset of selenium FirefoxWebElements
    time.sleep(5)
    html_text = driver.page_source
    time.sleep(15)
    soup = BeautifulSoup(html_text, 'html.parser')
    time.sleep(15)
    get_titles = soup.find_all('li', class_='chapters')
    time.sleep(15)
    get_chapternumbers = soup.find_all('a', class_="srch-result") # 200 resultset of bs element
    print(type(get_chapternumbers), len(get_chapternumbers))
    get_downloads_soup = soup.find_all('input', class_='downloads')# 200 resultset of bs element
    print(type(get_downloads_soup),len(get_downloads_soup))
    time.sleep(25)


    # record info by excel
    wb = load_workbook('20210209.xlsx')
    time.sleep(5)
    ws = wb['Sheet1']
    loop_excel = 0
   for get_download in get_downloads_soup:
        ws.insert_rows(1, amount=1)
        ws['A1'] = datetime.now(pytz.timezone('Asia/Hong_Kong')).strftime("%Y-%m-%d %H:%M:%S")
        ws['B1'] = get_titles[loop_excel].contents[0].strip()
        ws['C1'] = get_chapternumbers[loop_excel].get_text()
        ws['D1'] = page_number

        loop_excel = loop_excel + 1
    wb.save('20210209.xlsx')
    wb.close()
    print(f'page {page_number} excel recorded!')
  • It looks like variable `get_titles` might be None. What does `soup.find_all('li', class_='chapter')` return? – OpenCoderX Feb 20 '21 at 06:17
  • Please post your code as one block that demonstrates the problem. We don't want to put all that together in our heads. – Klaus D. Feb 20 '21 at 06:19
  • @KlausD. I have attached my code. The suggested posts/answers don't apply to my case. – sampan0423 Feb 20 '21 at 06:55
  • @OpenCoderX, definitely yes, but I can't catch the bug. – sampan0423 Feb 20 '21 at 06:56
  • 1
    @sampan0423 try: `soup.find_all("li", {"class": "chapter"})` or try `find_all` without the class arg – OpenCoderX Feb 21 '21 at 03:49
  • @OpenCoderX hi, thanks for your reply. I've figured it out; the error message is kind of misleading. https://stackoverflow.com/questions/66298356/beautifulsoup-tag-element-contents-strip-method-throw-typeerror-nonetype-is – sampan0423 Feb 21 '21 at 07:14
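(For readers landing here, the linked question explains why the message is misleading. In bs4, Tag's `__getattr__` treats an unknown attribute as a child-tag lookup via `find`, so when `contents[0]` is a Tag rather than a string, `tag.strip` becomes `tag.find('strip')`, which is None, and calling it raises the TypeError above. A minimal pure-Python stand-in that mimics this fallback, no bs4 required:)

```python
class FakeTag:
    """Mimics bs4.element.Tag's attribute fallback, for illustration only."""
    def __getattr__(self, name):
        # In bs4, an unknown attribute such as .strip becomes a search
        # for a child tag named 'strip'; with no such child, it is None.
        return None

tag = FakeTag()
try:
    tag.strip()          # None() -> the misleading TypeError
except TypeError as exc:
    print(exc)           # 'NoneType' object is not callable
```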
