
If I split a sentence with nltk.tokenize.word_tokenize() and then rejoin it with ' '.join(), the result isn't exactly the original, because punctuation attached to a word gets split off into separate tokens.

How can I programmatically rejoin like it was before?

from nltk import word_tokenize

sentence = "Story: I wish my dog's hair was fluffier, and he ate better"
print(sentence)
=> Story: I wish my dog's hair was fluffier, and he ate better

tokens = word_tokenize(sentence)
print(tokens)
=> ['Story', ':', 'I', 'wish', 'my', 'dog', "'s", 'hair', 'was', 'fluffier', ',', 'and', 'he', 'ate', 'better']

sentence = ' '.join(tokens)
print(sentence)
=> Story : I wish my dog 's hair was fluffier , and he ate better

Note that the : and 's are spaced differently from the original.

tim_xyz

2 Answers


From this answer: you can use MosesDetokenizer as your solution.

Just remember to download the required NLTK data package first: nltk.download('perluniprops')

>>> import nltk
>>> sentence = "Story: I wish my dog's hair was fluffier, and he ate better"
>>> tokens = nltk.word_tokenize(sentence)
>>> tokens
['Story', ':', 'I', 'wish', 'my', 'dog', "'s", 'hair', 'was', 'fluffier', ',', 'and', 'he', 'ate', 'better']
>>> from nltk.tokenize.moses import MosesDetokenizer
>>> detokens = MosesDetokenizer().detokenize(tokens, return_str=True)
>>> detokens
"Story: I wish my dog's hair was fluffier, and he ate better"
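Note: nltk.tokenize.moses was removed from NLTK in version 3.3 for licensing reasons; the Moses code now lives in the separate sacremoses package (pip install sacremoses, then from sacremoses import MosesDetokenizer). On recent NLTK versions you can instead use the built-in TreebankWordDetokenizer, which reverses the Penn Treebank rules that word_tokenize applies and needs no extra data downloads. A sketch using the token list from the question:

```python
from nltk.tokenize.treebank import TreebankWordDetokenizer

# Token list produced by word_tokenize() in the question.
tokens = ['Story', ':', 'I', 'wish', 'my', 'dog', "'s", 'hair',
          'was', 'fluffier', ',', 'and', 'he', 'ate', 'better']

# Reattaches punctuation (':' and ',') and clitics ("'s") to the
# preceding word, restoring the original spacing.
detokenized = TreebankWordDetokenizer().detokenize(tokens)
print(detokenized)  # Story: I wish my dog's hair was fluffier, and he ate better
```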
Lê Tư Thành

After joining, you can use the replace function:

sentence = sentence.replace(" '", "'").replace(" : ", ": ").replace(" ,", ",")
# o/p
# Story: I wish my dog's hair was fluffier, and he ate better
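The same idea can be generalized a bit with the standard-library re module, so you don't need one replace() per punctuation mark. A sketch — it still won't cover every tokenizer rule (quotes, brackets, etc.):

```python
import re

# Token list produced by word_tokenize() in the question.
tokens = ['Story', ':', 'I', 'wish', 'my', 'dog', "'s", 'hair',
          'was', 'fluffier', ',', 'and', 'he', 'ate', 'better']
text = ' '.join(tokens)

# Remove the space that joining inserted before punctuation marks
# and before clitics such as 's, n't, 're, 'll.
text = re.sub(r"\s+([.,:;?!])", r"\1", text)
text = re.sub(r"\s+(n't|'s|'re|'ll|'ve|'d|'m)\b", r"\1", text)
print(text)  # Story: I wish my dog's hair was fluffier, and he ate better
```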
qaiser