First I create a tokenizer as follows:

from tokenizers import Tokenizer
from tokenizers.models import WordPiece

# Build a WordPiece tokenizer with an explicit unknown token
tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))

from tokenizers.trainers import WordPieceTrainer

trainer = WordPieceTrainer(
    vocab_size=5000,
    min_frequency=3,
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)

from tokenizers.pre_tokenizers import WhitespaceSplit

tokenizer.pre_tokenizer = WhitespaceSplit()
# "files" is a list of paths to the training text files (defined earlier)
tokenizer.train(files, trainer)

from tokenizers.processors import TemplateProcessing

# Look up the ids of the special tokens the template needs
sep_id = tokenizer.token_to_id("[SEP]")
cls_id = tokenizer.token_to_id("[CLS]")

tokenizer.post_processor = TemplateProcessing(
    single="[CLS] $A [SEP]",
    pair="[CLS] $A [SEP] $B:1 [SEP]:1",
    special_tokens=[
        ("[CLS]", cls_id),
        ("[SEP]", sep_id),
    ],
)
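
To sanity-check the template, encoding a single sentence and a pair should show the special tokens in place (the example strings below are just placeholders):

output = tokenizer.encode("hello world")
print(output.tokens)    # expect: ['[CLS]', ..., '[SEP]']

pair = tokenizer.encode("hello world", "how are you")
print(pair.tokens)      # expect a second [SEP] after the $B segment
print(pair.type_ids)    # the $B segment and its [SEP] get type id 1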

Next, I want to train a BERT model on these tokens. I tried the following:

from transformers import DataCollatorForLanguageModeling

data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

But it gives me an error:

AttributeError: 'tokenizers.Tokenizer' object has no attribute 'mask_token'. "This tokenizer does not have a mask token which is necessary for masked language modeling."

Though I have attention_mask. Is it different from the mask token?
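
My understanding is that attention_mask only marks which positions are real tokens versus padding, while the collator needs the literal [MASK] token. So I suspect the raw tokenizers.Tokenizer has to be wrapped in a transformers tokenizer that exposes mask_token; a minimal sketch of what I mean (I am not sure this is the intended fix):

from transformers import PreTrainedTokenizerFast

# Wrap the trained tokenizer so transformers sees the special tokens;
# the strings match the special_tokens passed to WordPieceTrainer above
wrapped_tokenizer = PreTrainedTokenizerFast(
    tokenizer_object=tokenizer,
    unk_token="[UNK]",
    pad_token="[PAD]",
    cls_token="[CLS]",
    sep_token="[SEP]",
    mask_token="[MASK]",
)

data_collator = DataCollatorForLanguageModeling(
    tokenizer=wrapped_tokenizer, mlm=True, mlm_probability=0.15
)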

Talha Anwar
  • HuggingFace provides a good tutorial on how to train a model from scratch, you can check it on Google Colab here: https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb – Bill Jun 16 '22 at 15:34
