
resize_token_embeddings on a pretrained model with a different embedding size

I would like to ask about how to change the embedding size of a trained model.

I have a trained model, models/BERT-pretrain-1-step-5000.pkl. Now I am adding a new token [TRA] to the tokenizer and trying to use resize_token_embeddings on the pretrained model.

import torch
from pytorch_pretrained_bert_inset import BertModel  # BertTokenizer
from transformers import AutoTokenizer
from torch.nn.utils.rnn import pad_sequence
import tqdm

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model_bert = BertModel.from_pretrained('bert-base-uncased', state_dict=torch.load('models/BERT-pretrain-1-step-5000.pkl', map_location=torch.device('cpu')))

#print(tokenizer.all_special_tokens) #--> ['[UNK]', '[SEP]', '[PAD]', '[CLS]', '[MASK]']
#print(tokenizer.all_special_ids)    #--> [100, 102, 0, 101, 103]

num_added_toks = tokenizer.add_tokens(['[TRA]'], special_tokens=True)
model_bert.resize_token_embeddings(len(tokenizer))  # --> Embedding(30523, 768)
print('[TRA] token id: ', tokenizer.convert_tokens_to_ids('[TRA]'))  # --> 30522

But I encountered the error:

AttributeError: 'BertModel' object has no attribute 'resize_token_embeddings'

I assume this is because the model_bert I loaded (BERT-pretrain-1-step-5000.pkl) has a different embedding size. Is there any way to make the embedding size of my modified tokenizer and the model I want to use as the initial weights match?

Thanks a lot!!

1 Answer

resize_token_embeddings is a Hugging Face Transformers method. You are using the BertModel class from pytorch_pretrained_bert_inset, which does not provide such a method. Looking at the code, it seems they copied the BERT code from Hugging Face some time ago.

You can either wait for an update from INSET (or open a GitHub issue) or write your own code to extend the word_embeddings layer:

from torch import nn

embedding_layer = model.embeddings.word_embeddings
old_num_tokens, old_embedding_dim = embedding_layer.weight.shape
num_new_tokens = 1

# Create a new embedding layer with room for the additional token(s)
new_embeddings = nn.Embedding(old_num_tokens + num_new_tokens, old_embedding_dim)

# Move the new layer to the same device and dtype as the old one
new_embeddings.to(
    embedding_layer.weight.device,
    dtype=embedding_layer.weight.dtype,
)

# Copy over the old entries; the rows for the new tokens keep their random initialization
new_embeddings.weight.data[:old_num_tokens, :] = embedding_layer.weight.data[:old_num_tokens, :]

model.embeddings.word_embeddings = new_embeddings
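
As a quick sanity check (a minimal sketch, assuming the snippet above was applied with model = model_bert and the tokenizer from the question), you can verify that the resized layer lines up with the new token id:

# Assumes model_bert.embeddings.word_embeddings was replaced as shown above
embedding_layer = model_bert.embeddings.word_embeddings
print(embedding_layer.weight.shape)              # expected: torch.Size([30523, 768])
print(tokenizer.convert_tokens_to_ids('[TRA]'))  # expected: 30522, the index of the new row
# The new row is randomly initialized and only becomes meaningful after further training
print(embedding_layer.weight[30522, :5])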