
I'm playing with NLTK. I need NER, but it's not fast with many sentences. My current code is below:

from nltk.tag import StanfordNERTagger
from nltk.tokenize import word_tokenize

st_ner = StanfordNERTagger(...)
for s in sents:
    w_tokens = word_tokenize(s.strip())
    ner_tags = st_ner.tag(w_tokens)

A single sentence works fine.

Input:

Barack H. Obama is the 44th President of the United States.

output:

[('Barack', 'PERSON'), ('H.', 'PERSON'), ('Obama', 'PERSON'), ('is', 'O'), ('the', 'O'), ('44th', 'O'), ('President', 'O'), ('of', 'O'), ('the', 'O'), ('United', 'LOCATION'), ('States', 'LOCATION')]

But I need to handle many sentences. Is there something like a chunked or batch method that would let me finish the job faster?
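A sketch of the kind of thing I am hoping exists, for example if the tagger has a tag_sents method that accepts a list of token lists, so all sentences go to the Java process in one call instead of one call per sentence (the batch behaviour is my assumption):

from nltk.tag import StanfordNERTagger
from nltk.tokenize import word_tokenize

st_ner = StanfordNERTagger(...)  # same model and jar paths as above

# Tokenize everything up front, then tag the whole batch in one call,
# so the tagger process is launched once rather than once per sentence.
token_lists = [word_tokenize(s.strip()) for s in sents]
ner_tags_per_sentence = st_ner.tag_sents(token_lists)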

Joe Zhow
  • hi! Have you taken a look at these [1](https://stackoverflow.com/questions/33748554/how-to-speed-up-ne-recognition-with-stanford-ner-with-python-nltk), [2](https://stackoverflow.com/questions/33676526/pos-tagger-is-incredibly-slow), [3](https://stackoverflow.com/questions/33829160/why-is-pos-tag-so-painfully-slow-and-can-this-be-avoided) questions? – arturomp Jan 22 '18 at 19:56
  • I searched for StanfordNERTagger but didn't find those. Anyway, thanks a lot! – Joe Zhow Jan 23 '18 at 03:53

0 Answers