I have fine-tuned a machine translation model, and I'm trying to load the pytorch_model.bin checkpoint that was saved during training and predict the translation of a word. How do I convert the transformers.modeling_outputs.Seq2SeqModelOutput the model returns into plain text?
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('/content/drive/MyDrive/model', cache_dir=None)
tokenizer = AutoTokenizer.from_pretrained('/content/drive/MyDrive/model', cache_dir=None)
model.eval()
inputs2 = tokenizer('word', return_tensors="pt")["input_ids"]
inputs2.data  # => tensor([[ 1415, 259, 54622, 1]])
outputs = model(input_ids=inputs2, decoder_input_ids=inputs2)
type(outputs)  # => transformers.modeling_outputs.Seq2SeqModelOutput
output_str = ...  # <-- this should hold the translation of the input word
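From reading the docs, I suspect the Seq2SeqModelOutput returned by AutoModel only contains hidden states (no logits), and that I instead need to load the checkpoint with AutoModelForSeq2SeqLM, call generate(), and then decode the generated ids. Is something like the sketch below the right approach? (max_new_tokens=50 is just a value I picked.)

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the checkpoint with the language-modelling head attached
# (AutoModel loads only the base encoder-decoder without it).
model = AutoModelForSeq2SeqLM.from_pretrained('/content/drive/MyDrive/model')
tokenizer = AutoTokenizer.from_pretrained('/content/drive/MyDrive/model')
model.eval()

inputs = tokenizer('word', return_tensors="pt")

# generate() runs the decoder autoregressively from the encoder output,
# so I don't pass decoder_input_ids myself.
generated_ids = model.generate(**inputs, max_new_tokens=50)  # max_new_tokens is a guess

# Turn the generated token ids back into text.
output_str = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(output_str)
```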