
I have fine-tuned a machine translation model and I'm trying to load the `pytorch_model.bin` checkpoint that was saved during training and predict the translation of a word. How do I convert a `transformers.modeling_outputs.Seq2SeqModelOutput` into plain text?

model = AutoModel.from_pretrained('/content/drive/MyDrive/model', cache_dir=None)

tokenizer = AutoTokenizer.from_pretrained('/content/drive/MyDrive/model', cache_dir=None)

model.eval()

inputs2 = tokenizer('word', return_tensors="pt")["input_ids"]

inputs2.data  # results in:
tensor([[ 1415,   259, 54622,     1]])

outputs = model(input_ids=inputs2, decoder_input_ids=inputs2)
type(outputs)  # results in: transformers.modeling_outputs.Seq2SeqModelOutput

**output_str = #translation of the input word**
FlippyFloppy
  • What's the model name? – kiranr Mar 22 '21 at 19:17
  • MT5ForConditionalGeneration – FlippyFloppy Mar 23 '21 at 18:08
  • Use `tokens = model.generate(input_ids)` to get the [output](https://huggingface.co/transformers/main_classes/model.html?highlight=generate#transformers.generation_utils.GenerationMixin.generate) tokens, then decode those tokens to text using `output = tokenizer.decode(tokens.squeeze(), skip_special_tokens=True)`. – kiranr Mar 23 '21 at 18:38
  • @WaveShaper Please submit this as an answer. – cronoik Mar 23 '21 at 20:38
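Putting the comments together, a minimal sketch of the fix (assuming the checkpoint directory from the question and, per the comments, that the checkpoint is an `MT5ForConditionalGeneration`): the key point is that calling `model(...)` is a single forward pass and returns hidden states, not a translation; `model.generate()` runs the decoder autoregressively and returns token IDs you can decode.

```python
def translate(text, model_dir="/content/drive/MyDrive/model"):
    """Translate `text` with a fine-tuned seq2seq checkpoint.

    `model_dir` must hold the saved model weights and tokenizer files
    (the path here is the one from the question).
    """
    # Imported inside the function so the sketch can be defined
    # even where transformers is not installed.
    from transformers import AutoTokenizer, MT5ForConditionalGeneration

    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = MT5ForConditionalGeneration.from_pretrained(model_dir)
    model.eval()

    input_ids = tokenizer(text, return_tensors="pt").input_ids
    # generate() produces a complete output sequence of token IDs;
    # a plain forward pass only yields encoder/decoder hidden states.
    tokens = model.generate(input_ids)
    # squeeze() drops the batch dimension; skip_special_tokens removes
    # padding/EOS markers from the decoded string.
    return tokenizer.decode(tokens.squeeze(), skip_special_tokens=True)

# Usage: output_str = translate("word")
```

Note that `AutoModel` loads only the base encoder-decoder without the language-modeling head, so using the task-specific class (here `MT5ForConditionalGeneration`) is what makes `generate()` produce usable output tokens.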

0 Answers