
Resuming GPT2 finetuning with run_clm.py

Does Hugging Face GPT2 have a parameter to resume training from a saved checkpoint instead of training again from the beginning? Suppose the Python notebook crashes while training: the checkpoints will be saved, but when I train the model again it still starts training from the beginning.

Source: here

finetuning code:

!python3 run_clm.py \
    --train_file source.txt \
    --do_train \
    --output_dir gpt-finetuned \
    --overwrite_output_dir \
    --per_device_train_batch_size 2 \
    --model_name_or_path=gpt2 \
    --save_steps 100 \
    --num_train_epochs=1 \
    --block_size=200 \
    --tokenizer_name=gpt2

In the above command, run_clm.py is a script provided by Hugging Face to finetune GPT2 on a custom dataset.

Woody

2 Answers


To resume training from a checkpoint you use the --model_name_or_path parameter. Instead of passing the default gpt2, point it to your latest checkpoint folder.

So your command becomes:

!python3 run_clm.py \
    --train_file source.txt \
    --do_train \
    --output_dir gpt-finetuned \
    --overwrite_output_dir \
    --per_device_train_batch_size 2 \
    --model_name_or_path=/content/models/checkpoint-5000 \
    --save_steps 100 \
    --num_train_epochs=1 \
    --block_size=200 \
    --tokenizer_name=gpt2
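
If you are not sure which checkpoint folder is the newest, transformers ships a helper that scans the output directory for checkpoint-<step> folders. A minimal sketch, assuming the output directory from the question's command (gpt-finetuned); adjust the path to wherever your checkpoints actually live:

from transformers.trainer_utils import get_last_checkpoint

# Return the checkpoint-<step> folder with the highest step number inside
# the output directory (which must already exist), or None if no
# checkpoints have been written yet.
last_checkpoint = get_last_checkpoint("gpt-finetuned")
print(last_checkpoint)  # e.g. gpt-finetuned/checkpoint-5000

You can then pass that path to --model_name_or_path.
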
StuckInPhDNoMore

In newer versions of transformers you don't need to provide model_name_or_path anymore (check out here). Instead, remove --overwrite_output_dir and --model_name_or_path; the last checkpoint in the output_dir will then be loaded and training continues from that checkpoint.

NOTE 1: you should still pass --tokenizer_name, because without model_name_or_path the trainer does not know which tokenizer to load.

NOTE 2: set num_train_epochs as large as is meaningful for you. Since you can re-run the same command and it resumes from the last checkpoint, there is no need to set --num_train_epochs=1 and no need to know which checkpoint the process reached, like checkpoint-5000 above.

So you can run this command instead:

!python3 run_clm.py \
    --train_file source.txt \
    --do_train \
    --output_dir gpt-finetuned \
    --per_device_train_batch_size 2 \
    --save_steps 100 \
    --num_train_epochs=10 \
    --block_size=200 \
    --tokenizer_name=gpt2
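
For reference, this auto-resume behaviour is just the Trainer-level resume_from_checkpoint argument that run_clm.py calls internally. Below is a minimal self-contained sketch of that logic; the toy dataset, the very small save_steps, and the other hyperparameters are my own stand-ins for illustration, not anything the script prescribes:

import os

import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)
from transformers.trainer_utils import get_last_checkpoint

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Toy stand-in for source.txt: one short tokenized line repeated a few times.
ids = tokenizer("hello world", return_tensors="pt")["input_ids"][0]

class ToyDataset(torch.utils.data.Dataset):
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        # For causal LM training the labels are the input ids themselves.
        return {"input_ids": ids, "labels": ids}

args = TrainingArguments(
    output_dir="gpt-finetuned",
    per_device_train_batch_size=2,
    num_train_epochs=10,
    save_steps=2,  # save often so this toy run actually writes checkpoints
)
trainer = Trainer(model=model, args=args, train_dataset=ToyDataset())

# Resume from the newest checkpoint-<step> folder in output_dir if one exists
# (this also restores optimizer and scheduler state); otherwise start fresh.
if os.path.isdir(args.output_dir):
    last_checkpoint = get_last_checkpoint(args.output_dir)
else:
    last_checkpoint = None
trainer.train(resume_from_checkpoint=last_checkpoint)

Running the same snippet again after an interruption continues from the newest checkpoint instead of step 0, which is exactly what re-running the run_clm.py command above does.
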
Hamid