I'm fairly new to Python, so please bear with me.
I am trying to write a script that identifies the difficult words in a Dutch sentence and lists them. For this, I need to know whether the Dutch words in the input have any hypernyms.
When I download Open Multilingual Wordnet, I cannot import it with `from nltk.corpus import omw`, and I'm not sure why. Should I be using some other nltk module instead?
I have tried using the regular wordnet module instead and applying `lang='nld'` in different parts of the code, but this does not work.
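From the NLTK docs, my understanding is that OMW is not imported as its own corpus module; the multilingual data is reached through the regular wordnet reader's `lang` parameter. So I expected a lookup along these lines to work (`'hond'` is just an example word, and the printed result is only my guess at the output):

```
from nltk.corpus import wordnet as wn

# look up the synsets containing the Dutch lemma 'hond'
synsets = wn.synsets('hond', lang='nld')
print(synsets)  # e.g. [Synset('dog.n.01'), ...]
```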
Or maybe something else in my code is incorrect? Any help is appreciated. My full code:
```
import nltk
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('omw')
from nltk.corpus import wordnet as wn
#from nltk.corpus import omw as omw

input1 = input("Input difficult text: ").lower()
words = nltk.word_tokenize(input1)
word_list = [i for i in words if i.isalnum()]

# find hypernyms for words and append them to a list
l = []
for x in word_list:
    hyper = (wn.synset(x).hypernyms(lang='nld'))
    l.append(hyper[0].hypernyms() if len(hyper)> 0 else '')
```
The error message on Colab is as follows. (I think the ValueError itself is just because I called `wn.synset(x)` with a bare word and put `lang='nld'` on `hypernyms()`, instead of looking the word up with something like `wn.synsets(x, lang='nld')`. I.e., the ValueError is not the main problem.)
```
ValueError                                Traceback (most recent call last)
<ipython-input-29-92fedaeb123b> in <module>()
     43 l = []
     44 for x in word_list:
---> 45     hyper = (wn.synset(x).hypernyms(lang='nld'))
     46     l.append(hyper[0].hypernyms() if len(hyper)> 0 else '')
     47 

/usr/local/lib/python3.7/dist-packages/nltk/corpus/reader/wordnet.py in synset(self, name)
   1288     def synset(self, name):
   1289         # split name into lemma, part of speech and synset number
-> 1290         lemma, pos, synset_index_str = name.lower().rsplit('.', 2)
   1291         synset_index = int(synset_index_str) - 1
   1292 

ValueError: not enough values to unpack (expected 3, got 1)
```
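In case it clarifies what I'm after, this is my guess at a corrected loop, based on my (possibly wrong) reading of the docs that the lookup should be `wn.synsets(x, lang='nld')` and that `hypernyms()` takes no `lang` argument:

```
# my guess at a corrected loop: look up the Dutch lemma with
# wn.synsets(..., lang='nld'), then ask the synsets for their hypernyms
l = []
for x in word_list:
    synsets = wn.synsets(x, lang='nld')   # all synsets containing the Dutch lemma x
    if synsets:
        l.append(synsets[0].hypernyms())  # hypernyms of the first sense
    else:
        l.append([])                      # word not found in the Dutch wordnet
```

Is that the right pattern, or do I need a different module for OMW?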