It depends on what you mean. What you're saving is the model's weights, not the state of the training loop (optimizer state, learning-rate schedule, epoch counter, etc.). So if you load the model, it will include everything it has learned so far, and if you then start training again, the optimizer will start from scratch but will be optimizing the already-trained model. In that sense training "starts from the beginning", but it effectively continues from where the model left off.
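If you actually want to resume training exactly where it stopped, you need to save the optimizer state as well. A minimal sketch, assuming PyTorch; the filename `checkpoint.pt` and the small stand-in model are illustrative, not a transformers convention:

```python
import torch

model = torch.nn.Linear(4, 2)  # stand-in for your real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# save both states (save_pretrained alone would only cover the model)
torch.save({
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
    'epoch': 3,
}, 'checkpoint.pt')

# later: restore both and continue training from the saved epoch
checkpoint = torch.load('checkpoint.pt')
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
start_epoch = checkpoint['epoch']
```

With this, the training loop can pick up at `start_epoch` with the optimizer's momentum/statistics intact instead of resetting them.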
If you want to use GPT-2 you should define the model as
model = GPT2LMHeadModel.from_pretrained('gpt2')
Are you sure your model is being saved properly, i.e. with
model.save_pretrained('saveDir')
?
Note that you can use the same save name as the model name. If you do, then when you load the model with from_pretrained, the script will first look locally for a directory of that name, and only if it doesn't find one will it download from the Hugging Face Hub.
Example:
```python
# generally it's better to use a specific model head class, but this works
from transformers import AutoModel, AutoTokenizer

# the first time this runs it will download and save the model;
# every time after that it will load the saved copy if it can find
# the directory in the current working directory
model_name = 'gpt2'
model = AutoModel.from_pretrained(model_name)
model.save_pretrained(model_name)
```
Maybe this would be helpful.