When I use:
modelname = 'deepset/bert-base-cased-squad2'
model = BertForQuestionAnswering.from_pretrained(modelname)
tokenizer = AutoTokenizer.from_pretrained(modelname)
nlp = pipeline('question-answering', model=model, tokenizer=tokenizer)
result = nlp({'question': question, 'context': context})
it doesn't crash. However, when I use encode_plus():
modelname = 'deepset/bert-base-cased-squad2'
model = BertForQuestionAnswering.from_pretrained(modelname)
tokenizer = AutoTokenizer.from_pretrained(modelname)
inputs = tokenizer.encode_plus(question, context, return_tensors='pt')
I get this error:

The size of tensor a (629) must match the size of tensor b (512) at non-singleton dimension 1

which I understand, but why don't I get the same error in the first case? Can someone explain the difference?
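For context, my current understanding (which may be wrong) is that bert-base models only have a table of 512 position embeddings, so feeding 629 tokens fails at the embedding-add step, while the question-answering pipeline splits or truncates long contexts before the model sees them. Here is a minimal pure-Python sketch of that idea, with no transformers dependency; the names and numbers are illustrative, not the library's actual internals:

```python
# Sketch, assuming a BERT-style fixed table of 512 position embeddings.
MAX_POSITIONS = 512  # bert-base models are trained with at most 512 positions

def add_position_embeddings(token_embeddings, position_embeddings):
    """Element-wise add, mimicking the shape check inside BERT's embedding layer."""
    if len(token_embeddings) != len(position_embeddings):
        raise ValueError(
            f"The size of tensor a ({len(token_embeddings)}) must match "
            f"the size of tensor b ({len(position_embeddings)})"
        )
    return [t + p for t, p in zip(token_embeddings, position_embeddings)]

# Fake position-embedding table with exactly 512 entries:
position_table = [0.01 * i for i in range(MAX_POSITIONS)]

# A question+context pair that tokenizes to 629 tokens overflows the table:
long_input = [1.0] * 629
try:
    add_position_embeddings(long_input, position_table)
except ValueError as e:
    print(e)  # same shape complaint as the real traceback

# Truncating to 512 tokens (roughly what passing truncation=True to the
# tokenizer would do) avoids the mismatch:
truncated = long_input[:MAX_POSITIONS]
out = add_position_embeddings(truncated, position_table)
print(len(out))
```

So if my reading is right, the fix on the encode_plus() side would be something like truncation=True with max_length=512, at the cost of losing the tail of the context, whereas the pipeline presumably handles the overflow for you.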