25

I would like to get all the text visible from a website, after the HTML is rendered. I'm working in Python with the Scrapy framework. With xpath('//body//text()') I'm able to get it, but it comes with the HTML tags, and I only want the text. Any solution for this?

tomasyany

3 Answers

47

The easiest option would be to extract //body//text() and join everything found:

''.join(sel.select("//body//text()").extract()).strip()

where sel is a Selector instance.
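
For example, here is a minimal sketch of the same join inside a Scrapy callback (the spider name and URL below are placeholders, and .getall() assumes a recent Scrapy version; older ones use .extract()):

import scrapy

class BodyTextSpider(scrapy.Spider):
    name = "body_text"                    # hypothetical spider name
    start_urls = ["https://example.com"]  # placeholder URL

    def parse(self, response):
        # Join every text node found under <body> into one string.
        text = "".join(response.xpath("//body//text()").getall()).strip()
        yield {"text": text}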

Another option is to use nltk's clean_html():

>>> import nltk
>>> html = """
... <div class="post-text" itemprop="description">
... 
...         <p>I would like to have all the text visible from a website, after the HTML is rendered. I'm working in Python with Scrapy framework.
... With <code>xpath('//body//text()')</code> I'm able to get it, but with the HTML tags, and I only want the text. Any solution for this? Thanks !</p>
... 
...     </div>"""
>>> nltk.clean_html(html)
"I would like to have all the text visible from a website, after the HTML is rendered. I'm working in Python with Scrapy framework.\nWith xpath('//body//text()') I'm able to get it, but with the HTML tags, and I only want the text. Any solution for this? Thanks !"

Another option is to use BeautifulSoup's get_text():

get_text()

If you only want the text part of a document or tag, you can use the get_text() method. It returns all the text in a document or beneath a tag, as a single Unicode string.

>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup(html)
>>> print soup.get_text().strip()
I would like to have all the text visible from a website, after the HTML is rendered. I'm working in Python with Scrapy framework.
With xpath('//body//text()') I'm able to get it, but with the HTML tags, and I only want the text. Any solution for this? Thanks !
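
On Python 3 with a recent bs4, roughly the same call would look like the sketch below (the explicit "html.parser" argument avoids the missing-parser warning newer versions emit, the separator/strip arguments are optional whitespace tidying, and the sample markup is made up):

from bs4 import BeautifulSoup

html = "<div><p>Some <b>visible</b> text.</p></div>"  # sample markup
soup = BeautifulSoup(html, "html.parser")
print(soup.get_text(separator=" ", strip=True))       # Some visible text.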

Another option is to use lxml.html's text_content():

.text_content()

Returns the text content of the element, including the text content of its children, with no markup.

>>> import lxml.html
>>> tree = lxml.html.fromstring(html)
>>> print tree.text_content().strip()
I would like to have all the text visible from a website, after the HTML is rendered. I'm working in Python with Scrapy framework.
With xpath('//body//text()') I'm able to get it, but with the HTML tags, and I only want the text. Any solution for this? Thanks !
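
If the input is a whole Scrapy response rather than a snippet, a rough sketch of feeding it to lxml (assuming response.text, the decoded page body available on Scrapy HTML responses) would be:

import lxml.html

def visible_text(response):
    # Parse the decoded page and flatten it to plain text.
    tree = lxml.html.fromstring(response.text)
    return tree.text_content().strip()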
alecxe
  • I have deleted my question. I used the code below: html = sel.select("//body//text()"); tree = lxml.html.fromstring(html); item['description'] = tree.text_content().strip(). But I am getting the is_full_html = _looks_like_full_html_unicode(html) exceptions.TypeError: expected string or buffer error. What went wrong? – backtrack Jan 28 '15 at 15:53
  • 4
    Just as an update, `nltk` deprecated their `clean_html` method; calling it now raises `NotImplementedError: To remove HTML markup, use BeautifulSoup's get_text() function` – TheNastyOne Dec 17 '17 at 05:36
4

Have you tried?

xpath('//body//text()').re(r'(\w+)')

OR

xpath('//body//text()').extract()
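
To illustrate the difference between the two calls, here is a small sketch with a standalone Selector and made-up markup: .re() returns only the regex group matches, while .extract() returns the raw text nodes.

from scrapy.selector import Selector

sel = Selector(text="<body><p>Hello, world!</p></body>")
print(sel.xpath("//body//text()").re(r"(\w+)"))  # ['Hello', 'world']
print(sel.xpath("//body//text()").extract())     # ['Hello, world!']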
Pedro Lobito
1

The xpath('//body//text()') doesn't always dig deeper into the nodes nested below your last named tag (in your case, body). If you type xpath('//body/node()/text()').extract() you will see the nodes which are in your html body. You can try xpath('//body/descendant::text()').
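
For a quick side-by-side comparison, the three expressions can be run in the Scrapy shell (the URL is only a placeholder):

# scrapy shell "https://example.com"
response.xpath("//body/node()/text()").extract()        # text nodes of body's direct children
response.xpath("//body/descendant::text()").extract()   # every text node anywhere under body
response.xpath("//body//text()").extract()              # the asker's original expression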

Lovepreet Singh