
I am using NLTK's default POS tagging and default tokenization, and they seem sufficient. I'd like to use its default chunker too.

I am reading the NLTK book, but it doesn't seem like there is a default chunker?

TIMEX

2 Answers


You can get out-of-the-box named entity chunking with the nltk.ne_chunk() method. It takes a list of POS-tagged tuples:

nltk.ne_chunk([('Barack', 'NNP'), ('Obama', 'NNP'), ('lives', 'NNS'), ('in', 'IN'), ('Washington', 'NNP')])

results in:

Tree('S', [Tree('PERSON', [('Barack', 'NNP')]), Tree('ORGANIZATION', [('Obama', 'NNP')]), ('lives', 'NNS'), ('in', 'IN'), Tree('GPE', [('Washington', 'NNP')])])

It identifies Barack as a person, but Obama as an organization. So, not perfect.
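For raw text, the full default pipeline (tokenize, POS-tag, then chunk) is only a few lines. A minimal sketch, assuming the relevant NLTK data packages (e.g. punkt, maxent_ne_chunker, words, and the default tagger model) have been downloaded via nltk.download():

import nltk

sentence = "Barack Obama lives in Washington"
tokens = nltk.word_tokenize(sentence)   # default tokenizer
tagged = nltk.pos_tag(tokens)           # default POS tagger
tree = nltk.ne_chunk(tagged)            # default named entity chunker
print(tree)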

ealdent
  • What if I am not very concerned about named entities, but chunking in general? For example, "the yellow dog" is a chunk, and "is running" is a chunk. – TIMEX Nov 06 '09 at 20:35
  • Yeah, for that there's no default to my knowledge (though I don't know everything about NLTK, to be sure). You could use a RegexpChunkParser, though you'll have to develop the rules yourself (see the sketch below). There's an example here: http://gnosis.cx/publish/programming/charming_python_b18.txt – ealdent Nov 07 '09 at 03:01
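Following up on that comment: for general phrase chunking you can write a rule-based chunker with nltk.RegexpParser. The grammar below is only an illustrative sketch (two toy rules for noun phrases and verb groups), not a tested rule set:

import nltk

# Toy grammar: an optional determiner plus adjectives plus nouns forms an NP;
# a run of verbs forms a VP. Real rules will need refinement for your data.
grammar = r"""
  NP: {<DT>?<JJ>*<NN.*>+}
  VP: {<VB.*>+}
"""
chunker = nltk.RegexpParser(grammar)
tagged = [('the', 'DT'), ('yellow', 'JJ'), ('dog', 'NN'),
          ('is', 'VBZ'), ('running', 'VBG')]
print(chunker.parse(tagged))

This chunks "the yellow dog" as an NP and "is running" as a VP, matching the example in the comment above.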

I couldn't find a default chunker/shallow parser either, although the book describes how to build and train one, with example features. Coming up with additional features to get good performance shouldn't be too difficult.

See Chapter 7's section on Training Classifier-based Chunkers.
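As a starting point, here is a minimal sketch in the spirit of that chapter: a unigram chunker trained on the CoNLL-2000 chunking corpus (requires nltk.download('conll2000')). The book's classifier-based chunker builds on the same structure, replacing the unigram tagger with a classifier over hand-written features:

import nltk
from nltk.corpus import conll2000

class UnigramChunker(nltk.ChunkParserI):
    def __init__(self, train_sents):
        # Learn a mapping from POS tags to IOB chunk tags.
        train_data = [[(pos, chunk) for _, pos, chunk in nltk.chunk.tree2conlltags(sent)]
                      for sent in train_sents]
        self.tagger = nltk.UnigramTagger(train_data)

    def parse(self, sentence):
        # sentence is a list of (word, pos) tuples, as produced by nltk.pos_tag().
        pos_tags = [pos for _, pos in sentence]
        tagged = self.tagger.tag(pos_tags)
        conlltags = [(word, pos, chunk)
                     for (word, pos), (_, chunk) in zip(sentence, tagged)]
        return nltk.chunk.conlltags2tree(conlltags)

train_sents = conll2000.chunked_sents('train.txt', chunk_types=['NP'])
chunker = UnigramChunker(train_sents)
print(chunker.parse([('the', 'DT'), ('yellow', 'JJ'), ('dog', 'NN')]))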