This is the code I am trying, but it is generating an error.

import nltk
from nltk.corpus import stopwords 
from nltk.tokenize import word_tokenize, sent_tokenize 
stop_words = set(stopwords.words('english')) 

file_content = open("Dictionary.txt").read()
tokens = nltk.word_tokenize(file_content)

# sent_tokenize uses an instance of
# PunktSentenceTokenizer from the nltk.tokenize.punkt module

tokenized = sent_tokenize(tokens) 
for i in tokenized: 
    
    # word_tokenize is used to find the words
    # and punctuation in a string
    wordsList = nltk.word_tokenize(i) 

    # removing stop words from wordList 
    wordsList = [w for w in wordsList if w not in stop_words]

    # Using a part-of-speech tagger
    # (POS-tagger)
    tagged = nltk.pos_tag(wordsList) 

    print(tagged) 

Error:

Traceback (most recent call last):
  File "tag.py", line 12, in <module>
    tokenized = sent_tokenize(tokens)
  File "/home/mahadev/anaconda3/lib/python3.7/site-packages/nltk/tokenize/__init__.py", line 105, in sent_tokenize
    return tokenizer.tokenize(text)
  File "/home/mahadev/anaconda3/lib/python3.7/site-packages/nltk/tokenize/punkt.py", line 1269, in tokenize
    return list(self.sentences_from_text(text, realign_boundaries))
  File "/home/mahadev/anaconda3/lib/python3.7/site-packages/nltk/tokenize/punkt.py", line 1323, in sentences_from_text
    return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
  File "/home/mahadev/anaconda3/lib/python3.7/site-packages/nltk/tokenize/punkt.py", line 1323, in <listcomp>
    return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
  File "/home/mahadev/anaconda3/lib/python3.7/site-packages/nltk/tokenize/punkt.py", line 1313, in span_tokenize
    for sl in slices:
  File "/home/mahadev/anaconda3/lib/python3.7/site-packages/nltk/tokenize/punkt.py", line 1354, in _realign_boundaries
    for sl1, sl2 in _pair_iter(slices):
  File "/home/mahadev/anaconda3/lib/python3.7/site-packages/nltk/tokenize/punkt.py", line 317, in _pair_iter
    prev = next(it)
  File "/home/mahadev/anaconda3/lib/python3.7/site-packages/nltk/tokenize/punkt.py", line 1327, in _slices_from_text
    for match in self._lang_vars.period_context_re().finditer(text):
TypeError: expected string or bytes-like object

1 Answer

No idea what your code is supposed to do, but the error you are getting is caused by the data type of your tokens variable: sent_tokenize expects a string, while nltk.word_tokenize returns a list of tokens.

One way to make the error go away is to change that line to:

tokens = str(nltk.word_tokenize(file_content))
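
That said, str() only silences the TypeError by turning the token list into its string representation (brackets, commas, and quotes included), so the sentence boundaries Punkt finds will be meaningless. A cleaner fix is to pass the raw file contents straight to sent_tokenize and run word_tokenize on each sentence. Here is a minimal sketch, assuming Dictionary.txt is plain English text and that the punkt, stopwords, and averaged_perceptron_tagger NLTK resources have been downloaded:

import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize, sent_tokenize

stop_words = set(stopwords.words('english'))

# sent_tokenize expects a raw string, not a list of tokens
file_content = open("Dictionary.txt").read()

for sentence in sent_tokenize(file_content):
    # split each sentence into words and punctuation
    wordsList = word_tokenize(sentence)
    # remove stop words before tagging
    wordsList = [w for w in wordsList if w not in stop_words]
    # part-of-speech tag the remaining tokens
    print(nltk.pos_tag(wordsList))

This keeps the same per-sentence loop as your original code; the only real change is that sentence splitting happens on the raw string before any word tokenization.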