
I tried to load a model I found on Hugging Face: https://huggingface.co/deepset/gelectra-large-germanquad

The pipeline shows different (but correct) results than loading the model directly. What do I need to do to load the exact model the pipeline uses?

Pipeline:

from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="deepset/gelectra-base-germanquad",
    tokenizer="deepset/gelectra-base-germanquad"
)


qa_pipeline({
    "context": "Ein Geschäftsbericht kostet über 100.000 Euro",
    "question": "Wie teuer ist ein Geschäftsbericht?"
})
>>> out[0]: "über 100.000 Euro" 

Loading the model:

from transformers import ElectraTokenizer, ElectraForQuestionAnswering
import torch

tokenizer = ElectraTokenizer.from_pretrained("deepset/gelectra-base-germanquad")

model = ElectraForQuestionAnswering.from_pretrained("deepset/gelectra-base-germanquad")
question = "Wie teuer ist ein Geschäftsbericht?"
doc = "Ein Geschäftsbericht kostet über 100.000 Euro"
encoding = tokenizer(question, doc, add_special_tokens=True, return_tensors="pt")

outputs = model(**encoding)
start = outputs.start_logits
end = outputs.end_logits

# all_tokens was missing in the original snippet; without it the code
# raises a NameError instead of running at all
all_tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0])
answer_tokens = all_tokens[torch.argmax(start) : torch.argmax(end) + 1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
print(answer)
>>> out[1]: ""
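One thing worth checking: if `torch.argmax(end)` lands on a position before `torch.argmax(start)`, the slice is empty and you get `""`, whereas the pipeline searches for the best *valid* (start, end) pair. A minimal sketch of just the span-extraction step with dummy tensors standing in for `start_logits`, `end_logits`, and `input_ids` (no model download; the IDs and logits here are made up for illustration):

```python
import torch

# Dummy stand-ins for encoding["input_ids"] and the model's output logits.
input_ids = torch.tensor([[101, 2001, 2002, 2003, 2004, 102]])
start_logits = torch.tensor([[0.1, 0.2, 3.0, 0.1, 0.1, 0.1]])  # argmax -> index 2
end_logits = torch.tensor([[0.1, 0.1, 0.1, 2.5, 0.1, 0.1]])    # argmax -> index 3

start = torch.argmax(start_logits)
end = torch.argmax(end_logits)

# Slice the span directly from the input IDs; decoding IDs avoids the
# tokens -> IDs round-trip in the snippet above.
answer_ids = input_ids[0, start : end + 1]
print(answer_ids.tolist())  # [2002, 2003]

# If end < start, the slice is empty and the decoded answer is "".
empty_span = input_ids[0, torch.tensor(4) : torch.tensor(2) + 1]
print(empty_span.tolist())  # []
```

If the raw argmaxes produce an empty or nonsensical span, that alone can explain the difference from the pipeline output.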
Oweys