
I have a simpletransformers script that looks like this.

from simpletransformers.seq2seq import Seq2SeqModel, Seq2SeqArgs
import pandas as pd

args = Seq2SeqArgs()
args.num_train_epochs = 5  # the attribute is num_train_epochs, not num_train_epoch

model = Seq2SeqModel(
    "roberta",
    "roberta-base",
    "bert-base-cased",
    args=args,  # without passing args, the settings above are never used
)

df = pd.read_csv('english-french.csv')
df['input_text'] = df['english'].values
df['target_text'] = df['french'].values

model.train_model(df.head(1000))
print(model.eval_model(df.tail(10)))
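For context, Seq2SeqModel.train_model expects a DataFrame with input_text and target_text columns. A minimal sketch of that shape, using made-up sentence pairs since the linked CSV isn't shown here:

```python
import pandas as pd

# Hypothetical parallel sentences standing in for english-french.csv
pairs = [
    ("Hello.", "Bonjour."),
    ("Thank you.", "Merci."),
]
df = pd.DataFrame(pairs, columns=["english", "french"])

# simpletransformers' Seq2Seq models read these two column names
df["input_text"] = df["english"]
df["target_text"] = df["french"]

print(df[["input_text", "target_text"]])
```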

The eval_loss is {'eval_loss': 0.0001931049264385365}

However, when I run my prediction script

to_predict = ["They went to the public swimming pool."]
predictions=model.predict(to_predict)

I get this

['']

The dataset I used is here

I'm very confused by the output. Any help or explanation of why it returns nothing would be much appreciated.

DevDog

1 Answer


Use this model instead.

model = Seq2SeqModel(
    encoder_decoder_type="marian",
    encoder_decoder_name="Helsinki-NLP/opus-mt-en-mul",
    args=args,
    use_cuda=True,
)

RoBERTa is not a good option for your task: it is an encoder-only model, and a RoBERTa/BERT encoder-decoder pairing has never been pretrained to generate text, let alone French, so fine-tuning on only 1,000 pairs leaves it producing empty strings. The Marian checkpoints, by contrast, are pretrained specifically for translation.

I have rewritten your code in this Colab notebook.

Results

# Input
to_predict = ["They went to the public swimming pool.", "she was driving the shiny black car."]
predictions = model.predict(to_predict)
print(predictions)

# Output
['Ils aient cher à la piscine publice.', 'elle conduit la véricine noir glancer.']
Shaida Muhammad