Yes, that is possible. Just feed the token ids produced by the tokenizer to the word embedding layer:
from transformers import T5TokenizerFast, T5EncoderModel

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5EncoderModel.from_pretrained("t5-small")

# Tokenize without special tokens so that every id belongs to a word of the input
i = tokenizer(
    "This is a meaningless test sentence to show how you can get word embeddings",
    return_tensors="pt", return_attention_mask=False, add_special_tokens=False,
)
# Look up the non-contextualized embeddings directly from the embedding layer
o = model.encoder.embed_tokens(i.input_ids)
The output tensor has the following shape:
print(o.shape)
torch.Size([1, 19, 512])
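The last dimension is the hidden size of the model, which you can verify from its config:

print(model.config.d_model)
# 512 for t5-small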
Each of the 19 vectors is the embedding of one token. Depending on your task, you can map them back to the individual words with word_ids():
i.word_ids()
Output:
[0, 1, 2, 2, 3, 3, 3, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 12, 12, 12]
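If you need one vector per word rather than per token, one common option is to pool (for example, average) the token embeddings that share the same word id. Below is a minimal sketch; the helper name word_vectors and the mean-pooling strategy are my own choices, not part of transformers:

import torch

def word_vectors(encoding, token_embeddings):
    # word_ids() has one entry per token; tokens of the same word share an id.
    # There are no None entries here because add_special_tokens=False was used above.
    word_ids = encoding.word_ids()
    vectors = []
    for word_id in sorted(set(word_ids)):
        token_positions = [k for k, w in enumerate(word_ids) if w == word_id]
        # average the embeddings of all tokens that belong to this word
        vectors.append(token_embeddings[0, token_positions].mean(dim=0))
    return torch.stack(vectors)

w = word_vectors(i, o)
print(w.shape)  # one row per word, each of size 512

Whether averaging, summing, or simply taking the first subword token works best depends on your downstream task.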