I have a simple seq2seq training script using simpletransformers that looks like this:
import pandas as pd
from simpletransformers.seq2seq import Seq2SeqModel, Seq2SeqArgs

args = Seq2SeqArgs()
args.num_train_epochs = 5

# RoBERTa encoder paired with a BERT decoder
model = Seq2SeqModel(
    "roberta",
    "roberta-base",
    "bert-base-cased",
    args=args,
)

df = pd.read_csv('english-french.csv')
df['input_text'] = df['english'].values
df['target_text'] = df['french'].values

model.train_model(df.head(1000))
print(model.eval_model(df.tail(10)))
The eval_loss is {'eval_loss': 0.0001931049264385365}
However, when I run my prediction script

to_predict = ["They went to the public swimming pool."]
predictions = model.predict(to_predict)

I get this:

['']
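For reference, a minimal self-contained version of the prediction step would look something like this. The outputs/encoder and outputs/decoder paths are my assumption, based on the default output_dir ("outputs/") and my understanding of how simpletransformers saves the two halves of a generic encoder-decoder model; the rest mirrors the code above.

from simpletransformers.seq2seq import Seq2SeqModel

# Reload the trained encoder/decoder pair written out by train_model.
# The outputs/encoder and outputs/decoder paths are an assumption about
# where simpletransformers stores the saved encoder and decoder.
model = Seq2SeqModel(
    "roberta",
    "outputs/encoder",
    "outputs/decoder",
)

to_predict = ["They went to the public swimming pool."]
predictions = model.predict(to_predict)
print(predictions)  # prints [''] instead of a French translation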
The dataset I used is here.

I'm very confused by this output. Any help or explanation of why the prediction comes back empty would be much appreciated.