If I split a sentence with nltk.tokenize.word_tokenize()
then rejoin with ' '.join()
the result isn't exactly the original, because punctuation attached to words (the colon, the comma, and contractions like 's) gets split off into separate tokens.
How can I programmatically rejoin the tokens so the sentence reads exactly as it did before?
from nltk import word_tokenize
sentence = "Story: I wish my dog's hair was fluffier, and he ate better"
print(sentence)
=> Story: I wish my dog's hair was fluffier, and he ate better
tokens = word_tokenize(sentence)
print(tokens)
=> ['Story', ':', 'I', 'wish', 'my', 'dog', "'s", 'hair', 'was', 'fluffier', ',', 'and', 'he', 'ate', 'better']
sentence = ' '.join(tokens)
print(sentence)
=> Story : I wish my dog 's hair was fluffier , and he ate better
Note that the : and 's are now surrounded by spaces, unlike in the original.
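
For reference, here's a minimal sketch of the round trip I'm after, assuming NLTK's TreebankWordDetokenizer (which, as I understand it, reverses most of word_tokenize's Treebank splitting rules, though the round trip isn't guaranteed to be lossless for every input):
from nltk import word_tokenize
from nltk.tokenize.treebank import TreebankWordDetokenizer
sentence = "Story: I wish my dog's hair was fluffier, and he ate better"
tokens = word_tokenize(sentence)
# Reverse the Treebank splitting rules: reattach 's and close up
# the spaces before : and ,
detokenized = TreebankWordDetokenizer().detokenize(tokens)
print(detokenized)
=> Story: I wish my dog's hair was fluffier, and he ate better
Is the detokenizer the right tool here, or is there a more robust general approach?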