16

How can I find the summarized text for a given URL?

What do I mean by summarized text?

Merck $41.1 Billion Schering-Plough Bid Seeks Science

Link Description

Merck & Co.’s $41.1 billion purchase of Schering-Plough Corp. adds experimental drugs for blood clots, infections and schizophrenia and allows the companies to speed research on biotechnology drugs.

For the above URL, those three lines are the summary text:
a short 2-to-3-line description of the URL, which we usually obtain by fetching the page, examining its content, and then deriving a short description from the HTML markup.

Are there any good algorithms that do this? (or)
Are there any good libraries in Python/Django that do this?

Rama Vadakattu
  • 1,266
  • 2
  • 16
  • 24
  • possible duplicate of [summarize text or simplify text](http://stackoverflow.com/questions/5479333/summarize-text-or-simplify-text) – Mišo Jan 02 '14 at 21:23

4 Answers

22

I had the same need, and although Lemur has summarization capabilities, I found it buggy to the point of being unusable. Over the weekend I used NLTK to code up a summarize module in Python: https://github.com/thavelick/summarize

I took the algorithm from the Java library Classifier4J (http://classifier4j.sourceforge.net/), but used NLTK and idiomatic Python wherever possible.

Here is the basic usage:

>>> import summarize

A SimpleSummarizer (currently the only summarizer) makes a summary by using sentences with the most frequent words:

>>> ss = summarize.SimpleSummarizer()
>>> input = "NLTK is a python library for working human-written text. Summarize is a package that uses NLTK to create summaries."
>>> ss.summarize(input, 1)
'NLTK is a python library for working human-written text.'

You can specify as many sentences in the summary as you like.

>>> input = "NLTK is a python library for working human-written text. Summarize is a package that uses NLTK to create summaries. A Summariser is really cool. I don't think there are any other python summarisers."
>>> ss.summarize(input, 2)
"NLTK is a python library for working human-written text.  I don't think there are any other python summarisers."

Unlike the original algorithm from Classifier4J, this summarizer works correctly with punctuation other than periods:

>>> input = "NLTK is a python library for working human-written text! Summarize is a package that uses NLTK to create summaries."
>>> ss.summarize(input, 1)
'NLTK is a python library for working human-written text!'

UPDATE

I've now (finally!) released this under the Apache 2.0 license, the same license as nltk, and put the module up on github (see above). Any contributions or suggestions are welcome.

Tristan Havelick
  • 67,400
  • 20
  • 54
  • 64
  • @Tristan - My boss wanted to ask if you've thought about the licensing of this yet? I tried to find you on the site but I saw nothing. – Glycerine Jan 18 '11 at 09:47
  • Hi, a year later and I stumbled on this page. I agree licensing it would be great. I would note that the output is occasionally buggy...it repeated part of a sentence mid-summary. Summarizing [this long comment](http://ask.metafilter.com/182456/Work-in-a-tollbooth#2625891) into 5 sentences creates an output error when I run it. – Jordan Reiter Apr 08 '11 at 20:23
  • Is this still available somewhere? – snøreven Oct 22 '12 at 11:35
  • I just put this up on github, see the edited answer above – Tristan Havelick Oct 29 '12 at 15:46
4

Text summarization is a fairly complicated topic. If you have a need to do this in a serious way, you may wish to look at projects like Lemur (http://www.lemurproject.org/).

However, what I suspect you really want is a text abstract here. If you know what part of the document contains the body text, locate it using an HTML parsing library like BeautifulSoup, and then strip out the HTML; take the first sentence, or first N characters (whichever suits best), and use that. Sort of a poor cousin's abstract-generator :-)
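The "poor cousin's abstract" idea can be sketched with only the standard library's html.parser (BeautifulSoup would work the same way, just with less code); the 160-character cutoff and the restriction to p tags are arbitrary choices for illustration:

```python
from html.parser import HTMLParser


class ParagraphExtractor(HTMLParser):
    """Collects the text found inside <p> tags."""

    def __init__(self):
        super().__init__()
        self.in_p = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_p = True

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_p = False

    def handle_data(self, data):
        if self.in_p:
            self.chunks.append(data)


def abstract(html, n_chars=160):
    """Poor cousin's abstract: the first n_chars of the page's paragraph text."""
    parser = ParagraphExtractor()
    parser.feed(html)
    # Join paragraphs and normalize whitespace.
    text = " ".join(" ".join(parser.chunks).split())
    if len(text) <= n_chars:
        return text
    # Truncate on a word boundary.
    return text[:n_chars].rsplit(" ", 1)[0] + "..."


html = ("<html><body><p>Merck purchase adds experimental drugs.</p>"
        "<p>More details follow here.</p></body></html>")
print(abstract(html))
```

In practice you would fetch `html` from the URL first and point the extractor at whatever element holds the body text on that particular site.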

Jarret Hardie
  • 95,172
  • 10
  • 132
  • 126
4

Check out the Natural Language Toolkit. It's a very useful Python library if you're doing any text processing.

Then look at this paper by HP Luhn (1958). It describes a naive but effective method of generating summaries of text.

Use the nltk.probability.FreqDist object to track how often words appear in text and then score sentences according to how many of the most frequent words appear in them. Then select the sentences with the best scores and voila, you have a summary of the document.
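This frequency-scoring approach can be sketched in a few lines; here collections.Counter stands in for nltk's FreqDist, and the stopword list, top-10 cutoff, and sentence-splitting regex are simplifying assumptions:

```python
import re
from collections import Counter

# A tiny stopword list for illustration; NLTK ships a proper one.
STOPWORDS = {"a", "an", "the", "is", "are", "to", "of", "and", "in", "that", "it", "for"}


def summarize(text, n_sentences=2):
    """Luhn-style summary: keep the sentences containing the most frequent words."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS)
    top = {w for w, _ in freq.most_common(10)}

    def score(sentence):
        return sum(1 for w in re.findall(r"[a-z']+", sentence.lower()) if w in top)

    best = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Emit the selected sentences in their original order.
    return " ".join(s for s in sentences if s in best)


text = ("Python is great for text processing. "
        "Python has many text libraries. I like tea.")
print(summarize(text, 1))
```

Sentences about "Python" and "text" score highest because those words dominate the frequency distribution, so the off-topic sentence is dropped.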

I suspect NLTK has a means of loading documents from the web and getting all of the HTML tags out of the way. I haven't done that kind of thing myself, but if you look up the corpus readers you might find something helpful.

theycallmemorty
  • 12,515
  • 14
  • 51
  • 71
-4

Your best bet in this case would be to use an HTML parsing library like BeautifulSoup (http://www.crummy.com/software/BeautifulSoup/)

From there, you can fetch, for example, all of the page's p tags:

import urllib2
from BeautifulSoup import BeautifulSoup

page = urllib2.urlopen("http://www.bloomberg.com/apps/news?pid=20601103&sid=a8p0FQHnw.Yo&refer=us")
soup = BeautifulSoup(page)
soup.findAll('p')

And then, do some parsing around that. It depends entirely on the page, as every site is structured differently. You can get lucky on some sites, where you simply look for a p tag with the id "summary", while others (like Bloomberg) might require a bit more playing around.

Bartek
  • 614
  • 1
  • 5
  • 15