
I'm running run_clm.py to fine-tune GPT-2 from the Hugging Face Transformers library, following the language-modeling example:

!python run_clm.py \
    --model_name_or_path gpt2 \
    --train_file train.txt \
    --validation_file test.txt \
    --do_train \
    --do_eval \
    --output_dir /tmp/test-clm
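If the run is being killed for memory reasons (an assumption at this point, not something the log confirms), one thing worth trying is the same command with a reduced memory footprint. `--per_device_train_batch_size` and `--gradient_accumulation_steps` are standard `TrainingArguments` flags, and `--block_size` is a flag of the language-modeling example script; the specific values below are illustrative:

```shell
# Hypothetical retry with a smaller memory footprint: reduce the per-device
# batch size and sequence block length, and use gradient accumulation to
# keep the effective batch size at 8 (2 x 4).
python run_clm.py \
    --model_name_or_path gpt2 \
    --train_file train.txt \
    --validation_file test.txt \
    --do_train \
    --do_eval \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 4 \
    --block_size 512 \
    --output_dir /tmp/test-clm
```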

This is the output. The process appears to start, but then a ^C appears and the run stops:

The following columns in the training set don't have a corresponding argument in `GPT2LMHeadModel.forward` and have been ignored: .
The following columns in the evaluation set don't have a corresponding argument in `GPT2LMHeadModel.forward` and have been ignored: .
***** Running training *****
  Num examples = 2318
  Num Epochs = 3
  Instantaneous batch size per device = 8
  Total train batch size (w. parallel, distributed & accumulation) = 8
  Gradient Accumulation steps = 1
  Total optimization steps = 870
  0% 0/870 [00:00<?, ?it/s]^C

Here's my environment info:

  • transformers version: 3.4.0
  • Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
  • Python version: 3.6.9
  • Tensorflow version: 1.14
  • Using GPU in script?: yes

What could be causing the process to stop early like this?
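One common culprit on hosted runtimes such as Colab (assuming that is where this runs, which the `/tmp` output dir and `Linux-4.19.112+` platform suggest but do not prove) is the host killing the process when it exhausts RAM, in which case the ^C is printed by the runtime rather than typed by the user. A quick sanity check of the available headroom before launching, using standard Linux tools:

```shell
# Check system RAM and scratch-disk headroom before training;
# exhausting either can get the process killed by the host.
free -h
df -h /tmp
# On a GPU runtime, nvidia-smi reports free GPU memory (only if a GPU is present):
# nvidia-smi
```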

