I'm trying to fine-tune a T5 model on the C4-200M dataset. When I run the trainer, it always gets stuck at 10% (the 500th step). Is this a problem with my GPU or with my argument settings? I'm using wandb to log my metrics.
Here are my training arguments:
batch_size = 16  # number of examples per device in each batch

training_args = Seq2SeqTrainingArguments(
    output_dir="/weights_t5",
    evaluation_strategy="steps",  # "steps" is easy to control via eval_steps; "epoch" takes too long
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    learning_rate=2e-5,  # the default optimizer is AdamW
    num_train_epochs=1,
    weight_decay=0.01,
    save_total_limit=2,
    predict_with_generate=True,
    gradient_accumulation_steps=6,
    eval_steps=500,
    save_steps=500,
    fp16=True,
    load_best_model_at_end=True,
    logging_dir="/logs",
    report_to="wandb",
)
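For context, the trainer itself is wired up in the standard way, roughly like this (a minimal sketch rather than my exact code; model, tokenized_train, and tokenized_eval are placeholder names):

from transformers import Seq2SeqTrainer, DataCollatorForSeq2Seq

# Pads inputs and labels dynamically per batch for an encoder-decoder model
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_train,
    eval_dataset=tokenized_eval,
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()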
And here is my metric compute function, taken directly from the Hugging Face course (https://huggingface.co/course/chapter7/5#metrics-for-text-summarization):
import numpy as np
import nltk

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    # Replace -100 in the labels, since we can't decode it
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    # ROUGE expects a newline after each sentence
    decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]
    decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]
    result = rouge_metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
    # Scale the scores to percentages
    result = {key: value * 100 for key, value in result.items()}
    # Add the mean generated length
    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
    result["gen_len"] = np.mean(prediction_lens)
    return {k: round(v, 4) for k, v in result.items()}
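To rule out the metric function itself, I can call it by hand on a tiny fake batch outside the trainer (a minimal sketch; the token IDs are arbitrary, and nltk.download("punkt") may be needed once for sent_tokenize to work):

import numpy as np

# Arbitrary token IDs standing in for generated output and labels;
# -100 marks label positions that the trainer masks out.
fake_preds = np.array([[100, 200, 300, tokenizer.pad_token_id]])
fake_labels = np.array([[100, 200, 300, -100]])

print(compute_metrics((fake_preds, fake_labels)))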
I tried setting eval_steps both larger and smaller, but it still hangs. Is it some kind of settings error? Or is wandb giving up on it?