I'm familiar with Python's nltk.metrics.distance
module, which is commonly used to compute the edit distance of two strings.
I am interested in a function that computes such a distance not character-wise, as usual, but token-wise. By that I mean that you can only replace/add/delete whole tokens (instead of characters).
Example of regular edit distance and my desired tokenized version:
> char_dist("aa bbbb cc", "aa b cc")
3  # three single-character edits: delete the 'b' character three times
> token_dist("aa bbbb cc", "aa b cc")
1  # one token edit: replace the 'bbbb' token with the 'b' token
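To make this concrete, here is a minimal sketch of the behaviour I'm after. It assumes that nltk's edit_distance treats its inputs as generic sequences, so passing lists of whitespace-split tokens would compare whole tokens instead of characters (I haven't verified this is an officially supported use):

from nltk.metrics.distance import edit_distance

def token_dist(s1, s2):
    # Tokenize on whitespace, then let edit_distance operate on
    # whole tokens instead of individual characters.
    return edit_distance(s1.split(), s2.split())

print(edit_distance("aa bbbb cc", "aa b cc"))  # 3 (char-wise)
print(token_dist("aa bbbb cc", "aa b cc"))     # 1 (token-wise)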
Is there already a function that can compute token_dist
in Python? I'd rather use something already implemented and tested than write my own code. Thanks for any tips.