I want to use Perl modules from the WordNet::Similarity package to calculate semantic relatedness (Hirst-St Onge, lexical chains, etc.) between texts. Does anybody have an idea of how to use them from Python?
2 Answers
NLTK is the Python interface to WordNet.

Check out section 5 on this page for similarity. You will also have to install the module; the instructions for that are here.
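As a rough sketch of what that similarity section covers (the words and synset names below are just illustrative examples):

```python
from nltk.corpus import wordnet as wn

# Pick two synsets to compare (example synsets, chosen arbitrarily)
dog = wn.synset('dog.n.01')
cat = wn.synset('cat.n.01')

# Similarity measures exposed by NLTK's WordNet corpus reader
print(dog.path_similarity(cat))  # shortest-path similarity
print(dog.wup_similarity(cat))   # Wu-Palmer similarity
print(dog.lch_similarity(cat))   # Leacock-Chodorow similarity
```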

NDevox
- It doesn't have measures for semantic relatedness. It deals with semantic similarity only. – Spy May 28 '15 at 11:12
- Do you mean Levenshtein distance calculations? – chaos May 29 '15 at 06:25
- No, I mean Hirst-St Onge, context vector, extended Lesk, etc. These are available in the WordNet::Similarity (https://metacpan.org/release/WordNet-Similarity) package in Perl. I want to use these modules from Python, maybe through some wrapper? – Spy May 29 '15 at 06:50
- If these are not available, you can try to run them from Python using subprocess, something like `pipe = subprocess.Popen(["perl", "/somepath/perl_script.pl"], stdout=subprocess.PIPE)` (see the sketch below). – chaos May 29 '15 at 08:46
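A minimal sketch of that subprocess approach, assuming you have written (or adapted) a Perl script that wraps WordNet::Similarity; the script path, word arguments, and output format are placeholders:

```python
import subprocess

# The script path and word arguments are placeholders for your own
# Perl wrapper around WordNet::Similarity.
proc = subprocess.Popen(
    ["perl", "/somepath/relatedness.pl", "car#n#1", "bus#n#1"],
    stdout=subprocess.PIPE,
)
output, _ = proc.communicate()
print(output.decode())
```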
You need to use nltk.corpus.wordnet. It provides a synsets() function that will return what you need. Basically, the WordNet corpus reader from NLTK offers words, synsets, lemmas, verb frames, similarity measures, etc. You can check the extensive NLTK howto for code and approach samples; a small example follows.
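For instance, a short sketch of browsing synsets and lemmas through the corpus reader (the word 'run' is an arbitrary example):

```python
from nltk.corpus import wordnet as wn

# List verb synsets for a word, with their definitions and lemmas
for synset in wn.synsets('run', pos=wn.VERB):
    print(synset.name(), '-', synset.definition())
    print('  lemmas:', [lemma.name() for lemma in synset.lemmas()])
```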

chaos