I know there are several ways to detect proper nouns and chunk them with existing tools, but after that the output is an array full of the chunked words. How can I rewrite the sentence with the chunked proper nouns?
Example:
John Rose Center is very beautiful place and i want to go there with
Barbara Palvin. Also there are stores like Adidas ,Nike , Reebok.
If I use the Stanford parser (http://nlp.stanford.edu:8080/parser/index.jsp), the output will be:
John/NNP Rose/NNP Center/NNP is/VBZ very/RB beautiful/JJ place/NN and/CC i/FW want/VBP to/TO go/VB there/RB with/IN Barbara/NNP Palvin/NNP ./.
Also/RB there/EX are/VBP stores/NNS like/IN Adidas/NNP ,/, Nike/NNP ,/, Reebok/NNP ./.
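I can also get roughly the same tags locally with NLTK instead of the web demo. This is just a sketch using the default tagger (nltk.pos_tag), so some words may get slightly different tags than the Stanford demo:

import nltk

sentence = ("John Rose Center is very beautiful place and i want to go there "
            "with Barbara Palvin. Also there are stores like Adidas, Nike, Reebok.")

tokens = nltk.word_tokenize(sentence)   # split the text into word tokens
tagged = nltk.pos_tag(tokens)           # attach a POS tag to each token
print(tagged)
# [('John', 'NNP'), ('Rose', 'NNP'), ('Center', 'NNP'), ('is', 'VBZ'), ...]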
How can I rewrite the sentence like this? Assume that we created an array from the tokenized sentence, where each chunked proper noun counts as one word:
for chunk in arr:
    print(chunk)
['John Rose Center']
['is']
['very']
['beautiful']
.
.
['Barbara Palvin']
['Also']
['there']
.
.
['like']
['Adidas']
['Nike']
['Reebok']
"Also" or other words like this won't be a problem for me just tried many times.And still confused what should i do to append chunked proper names in to my new sentence.I searched all the questions so have mercy for me i am new at both python and nltk.Sorry for bad english.
There is no limitation like "I must use only the Stanford parser". Feel free to use any method (even regex) that solves my problem; it will be very useful for me!
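For example, by a regex method I mean something like running a pattern over the word/TAG output above to pull out the runs of NNP tokens (again, just an illustration of the idea):

import re

tagged_text = ("John/NNP Rose/NNP Center/NNP is/VBZ very/RB beautiful/JJ place/NN "
               "and/CC i/FW want/VBP to/TO go/VB there/RB with/IN Barbara/NNP Palvin/NNP ./.")

# Each run of one or more consecutive word/NNP tokens is one proper-noun chunk.
for match in re.finditer(r'(?:\S+/NNP\s*)+', tagged_text):
    chunk = ' '.join(tok.split('/')[0] for tok in match.group().split())
    print(chunk)
# John Rose Center
# Barbara Palvin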