Is there a way to know the mapping from the tokens back to the original words in the tokenizer.decode()
function?
For example:
from transformers import RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained('roberta-large', do_lower_case=True)
text = "This is a tokenization example"
tokenized = tokenizer.tokenize(text)
## ['this', 'Ġis', 'Ġa', 'Ġtoken', 'ization', 'Ġexample']
encoded = tokenizer.encode_plus(text)
## encoded['input_ids']=[0, 42, 16, 10, 19233, 1938, 1246, 2]
decoded = tokenizer.decode(encoded['input_ids'])
## '<s> this is a tokenization example</s>'
The objective is a function that maps each token produced during decoding back to the
correct input word; here that would be:
desired_output = [[1],[2],[3],[4,5],[6]]
As 'this' corresponds to id 42, while 'token' and 'ization' correspond to
ids [19233, 1938], which are at indexes 4 and 5 of the input_ids array.
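A minimal sketch of such a function, assuming RoBERTa-style BPE output where a leading 'Ġ' marks the start of a new word, and where token positions in input_ids are shifted by one for the leading <s> special token (the helper name tokens_to_word_indexes is made up for illustration):

```python
def tokens_to_word_indexes(tokens, offset=1):
    """Group positions in input_ids by source word.

    Sketch under the assumption that a leading 'Ġ' marks a word start
    and that `offset` special tokens (here <s>) precede the first token.
    """
    groups = []
    for position, token in enumerate(tokens, start=offset):
        if position == offset or token.startswith('Ġ'):
            groups.append([position])    # a new word starts at this position
        else:
            groups[-1].append(position)  # continuation of the previous word
    return groups

tokens = ['this', 'Ġis', 'Ġa', 'Ġtoken', 'ization', 'Ġexample']
print(tokens_to_word_indexes(tokens))
# [[1], [2], [3], [4, 5], [6]]
```

Note that with a fast tokenizer (e.g. RobertaTokenizerFast), the BatchEncoding returned by the tokenizer exposes word_ids(), which gives the source-word index (or None for special tokens) for every token position directly, without relying on the 'Ġ' heuristic.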