I'm fine-tuning QA models from Hugging Face pretrained models using the Hugging Face Trainer. During training, the validation loss doesn't show. My compute_metrics function returns accuracy and F1 score, which don't show in the log either.
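For reference, my compute_metrics is roughly of this shape (a simplified sketch; the sklearn helpers here just stand in for my actual metric code):

import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # For a QA model, predictions are (start_logits, end_logits) and
    # label_ids are (start_positions, end_positions)
    start_logits, end_logits = eval_pred.predictions
    start_labels, end_labels = eval_pred.label_ids
    start_preds = np.argmax(start_logits, axis=-1)
    end_preds = np.argmax(end_logits, axis=-1)
    accuracy = (accuracy_score(start_labels, start_preds)
                + accuracy_score(end_labels, end_preds)) / 2
    f1 = (f1_score(start_labels, start_preds, average="macro")
          + f1_score(end_labels, end_preds, average="macro")) / 2
    return {"accuracy": accuracy, "f1": f1}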
Here is my code for the Trainer setup:
args = TrainingArguments(
output_dir="./result_albert_nontracking",
evaluation_strategy="steps",
save_strategy="epoch",
max_steps=10000,
do_train=True,
do_eval=True,
warmup_steps=500,
num_train_epochs=3,
weight_decay=0.01,
learning_rate=5e-5,
logging_dir='./logs',
logging_steps=500,
eval_steps=500,
)

trainer = Trainer(
model=model,
args=args,
train_dataset=train_dataset,
eval_dataset=validation_dataset,
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics
)
trainer.train()
I have also set up a logger with:
from transformers.utils import logging  # assuming this is the logging module in use, given get_logger

logger = logging.get_logger(__name__)
logger.setLevel(logging.DEBUG)
Here is what I got: I've tried different checkpoints ('bert-base-cased', 'albert-base-v2', 'roberta-base') and got the same 'No log' every time. Does anyone know what the problem is? Thanks in advance!