I'm running:

# original training script

trainer = transformers.Trainer(
    model=model,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,  # turn on the eval dataset for comparisons
    args=transformers.TrainingArguments(
        num_train_epochs=2,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=1,
        warmup_ratio=0.05,
        max_steps=20,  # note: a positive max_steps overrides num_train_epochs
        learning_rate=2e-4,
        fp16=True,
        logging_steps=1,
        output_dir="outputs",
        optim="paged_adamw_8bit",
        lr_scheduler_type='cosine',
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False  # silence the warnings. Please re-enable for inference!

I'm not 100% sure, but I think the loss shown is against the training dataset rather than the eval dataset: the log only lists training steps and their losses.

How do I get the loss against the eval set logged during training (and, ideally, the training set too)?

I would have expected that adding eval_dataset was enough...
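
From reading the TrainingArguments docs, my current guess is that the Trainer only runs evaluation when an evaluation strategy is set (the default is "no"), so passing eval_dataset alone isn't enough. Here's a minimal sketch of the change I'm considering, assuming evaluation_strategy="steps" and eval_steps are the right knobs (eval_steps=5 is just an illustrative value; newer transformers releases rename the argument to eval_strategy):

import transformers

# sketch (untested): same arguments as above, plus an evaluation strategy so
# the Trainer periodically evaluates on eval_dataset and logs eval_loss
args = transformers.TrainingArguments(
    num_train_epochs=2,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
    warmup_ratio=0.05,
    max_steps=20,
    learning_rate=2e-4,
    fp16=True,
    logging_steps=1,
    evaluation_strategy="steps",  # run eval during training, not only via trainer.evaluate()
    eval_steps=5,                 # illustrative: evaluate every 5 of the 20 steps
    output_dir="outputs",
    optim="paged_adamw_8bit",
    lr_scheduler_type='cosine',
)

If I understand the logging correctly, the loss printed every logging_steps is already the training loss, and with the above, eval_loss should show up in those logs too (everything also ends up in trainer.state.log_history after training). Can anyone confirm?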
