
I would like to ask how to change the embedding size of a trained model.

I have a trained model, models/BERT-pretrain-1-step-5000.pkl. Now I am adding a new token [TRA] to the tokeniser and trying to use resize_token_embeddings on the pretrained model.

from pytorch_pretrained_bert_inset import BertModel  # BertTokenizer
from transformers import AutoTokenizer
from torch.nn.utils.rnn import pad_sequence
import torch
import tqdm

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model_bert = BertModel.from_pretrained('bert-base-uncased', state_dict=torch.load('models/BERT-pretrain-1-step-5000.pkl', map_location=torch.device('cpu')))

#print(tokenizer.all_special_tokens) #--> ['[UNK]', '[SEP]', '[PAD]', '[CLS]', '[MASK]']
#print(tokenizer.all_special_ids)    #--> [100, 102, 0, 101, 103]

num_added_toks = tokenizer.add_tokens(['[TRA]'], special_tokens=True)
model_bert.resize_token_embeddings(len(tokenizer))  # --> Embedding(30523, 768)
print('[TRA] token id: ', tokenizer.convert_tokens_to_ids('[TRA]'))  # --> 30522

But I encountered the error:

AttributeError: 'BertModel' object has no attribute 'resize_token_embeddings'

I assume this is because the model I loaded into model_bert (BERT-pretrain-1-step-5000.pkl) has a different embedding size. I would like to know whether there is any way to make the embedding size of the model I want to use as the initial weights match my modified tokeniser.

Thanks a lot!!


1 Answer


resize_token_embeddings is a Hugging Face Transformers method. You are using the BertModel class from pytorch_pretrained_bert_inset, which does not provide such a method. Looking at the code, it seems that they copied the BERT code from Hugging Face some time ago.
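
For contrast, resizing works out of the box when the model itself comes from the transformers library. A minimal sketch (this loads the stock bert-base-uncased weights, not your INSET checkpoint):

# Sketch: resize_token_embeddings as provided by Hugging Face Transformers.
# Note: this uses the stock bert-base-uncased weights, not the INSET checkpoint.
from transformers import AutoTokenizer, BertModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

tokenizer.add_tokens(['[TRA]'], special_tokens=True)
model.resize_token_embeddings(len(tokenizer))        # Embedding(30523, 768)
print(tokenizer.convert_tokens_to_ids('[TRA]'))      # 30522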

You can either wait for an update from INSET (maybe create a GitHub issue) or write your own code to extend the word_embeddings layer:

from torch import nn

# model is the BertModel loaded from your checkpoint (model_bert in your code)
embedding_layer = model.embeddings.word_embeddings

old_num_tokens, old_embedding_dim = embedding_layer.weight.shape

num_new_tokens = 1

# Creating a new embedding layer with more entries
new_embeddings = nn.Embedding(
    old_num_tokens + num_new_tokens, old_embedding_dim
)

# Setting device and dtype accordingly
new_embeddings.to(
    embedding_layer.weight.device,
    dtype=embedding_layer.weight.dtype,
)

# Copying the old entries
new_embeddings.weight.data[:old_num_tokens, :] = embedding_layer.weight.data[
    :old_num_tokens, :
]

model.embeddings.word_embeddings = new_embeddings
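
After swapping in the new layer, you can sanity-check the result. A minimal check, assuming model is the BertModel loaded from your checkpoint and tokenizer already contains [TRA]:

import torch

# The embedding matrix should now have one extra row, e.g. (30523, 768).
print(model.embeddings.word_embeddings.weight.shape)

# The new row is randomly initialised by nn.Embedding; the id the tokenizer
# assigns to [TRA] (30522 here) now maps to a valid embedding row.
new_id = tokenizer.convert_tokens_to_ids('[TRA]')
with torch.no_grad():
    print(model.embeddings.word_embeddings(torch.tensor([new_id])).shape)  # torch.Size([1, 768])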