I need to get the embeddings from a pre-trained LLM. As of now I am doing something like this:

def gen_embeddings(self, code):
    # Tokenize the input (a string or a list of strings).
    tokenized_input_pos = self.tokenizer(code, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        output = self.model(**tokenized_input_pos)
    # Mean-pool the last hidden state over the token dimension.
    embedding = output.last_hidden_state.mean(dim=1).squeeze().tolist()
    # squeeze() drops the batch dimension for a single input, so wrap it back in a list.
    if len(code) == 1:
        return [embedding]
    else:
        return embedding

As you can see, I am taking the mean of the last hidden state. But this approach takes a lot of time. Instead of taking the mean of the last hidden state, is it possible to get it from the first 4 layers? I know it might affect the model's performance, but for now I am doing a POC, so speed is of the essence.

1 Answer

def gen_embeddings(self, code):
    tokenized_input_pos = self.tokenizer(code, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        outputs = self.model(**tokenized_input_pos, output_hidden_states=True)

    # outputs.hidden_states is a tuple of (num_layers + 1) tensors; index 0 is
    # the embedding-layer output, so the first 4 transformer layers are [1:5].
    hidden_states = outputs.hidden_states[1:5]
    # Concatenate along the feature dimension: the embedding size becomes 4 * hidden_size.
    embeddings = torch.cat(hidden_states, dim=-1)
    embeddings = embeddings.mean(dim=1).squeeze().tolist()

    if len(code) == 1:
        return [embeddings]
    else:
        return embeddings
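
One caveat: `output_hidden_states=True` still runs the full forward pass through every layer, so this alone will not make inference faster. If speed is the priority, you can drop the upper encoder layers entirely, so only the first 4 are ever executed. A minimal sketch, assuming a BERT-style model (here built from a random-weight `BertConfig` purely for illustration, so nothing is downloaded; with your own model you would truncate `self.model.encoder.layer` the same way):

```python
import torch
from transformers import BertConfig, BertModel

# Small random-weight BERT built locally, purely for illustration (no download).
config = BertConfig(num_hidden_layers=12, hidden_size=64,
                    num_attention_heads=4, intermediate_size=128)
model = BertModel(config)
model.eval()

# Keep only the first 4 encoder layers; the upper 8 are never executed.
# Slicing an nn.ModuleList returns an nn.ModuleList, so this is safe to assign.
model.encoder.layer = model.encoder.layer[:4]
model.config.num_hidden_layers = 4

# Dummy batch of token ids standing in for the tokenizer output.
input_ids = torch.randint(0, config.vocab_size, (2, 10))
attention_mask = torch.ones_like(input_ids)

with torch.no_grad():
    out = model(input_ids=input_ids, attention_mask=attention_mask)

# last_hidden_state is now the output of layer 4 of the truncated model.
embedding = out.last_hidden_state.mean(dim=1)
print(embedding.shape)  # torch.Size([2, 64])
```

Note that the attribute path to the layer stack varies by architecture (`encoder.layer` for BERT-style models; other models expose it differently), so check your model's structure before truncating.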