I am trying to use a tokenizer from huggingface, but I do not have a vocab file.
from tokenizers import BertWordPieceTokenizer, CharBPETokenizer, ByteLevelBPETokenizer
from tokenizers import Tokenizer
from tokenizers.models import BPE
text = 'the quick brown fox jumped over the lazy dog !!!'
tokenizer = CharBPETokenizer()
print(tokenizer)
# Tokenizer(vocabulary_size=0, model=BPE, unk_token=<unk>, suffix=</w>, dropout=None,
#           lowercase=False, unicode_normalizer=None, bert_normalizer=True,
#           split_on_whitespace_only=False)
tokenizer = Tokenizer(BPE())
out = tokenizer.encode(text)
print(out.tokens)
# []
According to https://github.com/huggingface/tokenizers/blob/main/bindings/python/py_src/tokenizers/implementations/char_level_bpe.py, when no vocab is passed, CharBPETokenizer just builds Tokenizer(BPE()), which matches the empty output above.
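To test the lack-of-vocab theory, I tried training from scratch; this is a minimal sketch, assuming some small local text file exists (corpus.txt is just a placeholder name, and the hyperparameters are arbitrary):

from tokenizers import CharBPETokenizer

tokenizer = CharBPETokenizer()
# training should populate the vocab, after which encode() returns tokens
tokenizer.train(files=["corpus.txt"], vocab_size=1000, min_frequency=1)
print(tokenizer.get_vocab_size())     # non-zero after training
print(tokenizer.encode(text).tokens)  # no longer []

But I do not want to train from scratch; I want the default/pretrained vocabs.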
So this looks like a lack-of-vocab issue. Can someone point me to where to get the default vocab files for BertWordPieceTokenizer, CharBPETokenizer, ByteLevelBPETokenizer, SentencePieceUnigramTokenizer, and BaseTokenizer?
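For context, my understanding is that once I have the files, they get passed to the constructors roughly like this (all paths below are hypothetical placeholders, not files I actually have):

from tokenizers import Tokenizer, BertWordPieceTokenizer, ByteLevelBPETokenizer

# placeholder paths for the pretrained files I am looking for
bert_tok = BertWordPieceTokenizer("bert-base-uncased-vocab.txt")
bbpe_tok = ByteLevelBPETokenizer("gpt2-vocab.json", "gpt2-merges.txt")

# I assume a serialized tokenizer can also be pulled from the hub, e.g.:
# tok = Tokenizer.from_pretrained("bert-base-uncased")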